Published a stable version of the Ukrainian Text-to-Speech library on GitHub and PyPI.
Features:
- Multi-speaker model: two female voices (Tetiana, Lada) and one male voice (Mykyta)
- Fine-grained control over speech parameters, including duration, fundamental frequency (F0), and energy
- High-fidelity speech generation using the RAD-TTS++ acoustic model
- Fast vocoding using Vocos
- Synthesizes long sentences effectively
- 44.1 kHz sampling rate
- Tested on Linux and Windows/WSL
- Python API (requires Python 3.9 or later)
- CUDA support for GPU acceleration
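To give a feel for the Python API, here is a hypothetical usage sketch. The names below (`TTS`, `Voices`, `synthesize`, the `device` parameter) are illustrative assumptions, not the library's confirmed interface; check the repository README for the actual API.

```python
# Hypothetical sketch -- class and method names are assumptions,
# not the library's documented API.
from ukrainian_tts import TTS, Voices

tts = TTS(device="cuda")  # CUDA-enabled; use "cpu" if no GPU is available
with open("output.wav", "wb") as f:
    tts.synthesize(
        "Привіт, світе!",      # text to synthesize
        voice=Voices.Tetiana,  # Tetiana, Lada, or Mykyta
        output=f,              # 44.1 kHz WAV output
    )
```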
We just published the LlamaIndex unit for the agents course, and it offers a great contrast with the smolagents unit by looking at:
- What makes llama-index stand out
- How LlamaHub is used for integrations
- Creating QueryEngine components
- Using agents and tools
- Agentic and multi-agent workflows
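As a taste of the QueryEngine topic, here is a minimal sketch of building a query engine over local documents with LlamaIndex. This is my own illustration, not course material; it assumes `llama-index` is installed, a `data/` directory exists, and a default LLM (e.g. an OpenAI API key) is configured.

```python
# Minimal QueryEngine sketch -- assumes llama-index is installed and an
# LLM/embedding backend (e.g. OpenAI API key) is configured.
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

documents = SimpleDirectoryReader("data").load_data()   # load local files
index = VectorStoreIndex.from_documents(documents)      # embed and index them
query_engine = index.as_query_engine()                  # retrieval + synthesis

response = query_engine.query("What topics does this unit cover?")
print(response)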
The team has been working flat out on this for a few weeks, supported by Logan Markewich and Laurie Voss over at LlamaIndex.
In my recent article “Piercing the Deepest Mathematical Mystery”, I paved the way to proving a famous centuries-old conjecture: are the digits of major mathematical constants such as π, e, log 2, or √2 evenly distributed? No one has ever managed to prove even the most basic facts, such as whether the proportion of ‘0’s or ‘1’s in the binary expansion of any of these constants converges at all, or oscillates indefinitely between 0% and 100%.
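To make the question concrete, here is a small self-contained sketch (my own illustration, not the article's code) that measures the empirical proportion of ‘1’ bits in the binary expansion of √2, computed exactly with integer arithmetic rather than floating point:

```python
from math import isqrt

def sqrt2_bits(n: int) -> str:
    # floor(sqrt(2) * 2**n) via integer square root: exact binary digits,
    # no floating-point rounding error. 2 << (2 * n) == 2 * 4**n.
    return bin(isqrt(2 << (2 * n)))[2:]

bits = sqrt2_bits(10_000)            # first 10,001 binary digits of sqrt(2)
ones_ratio = bits.count("1") / len(bits)
print(f"proportion of '1' bits: {ones_ratio:.4f}")
```

Empirically the ratio hovers near 50%, but, as the article stresses, no proof exists that it even converges.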
Here I provide an overview of the new framework built to uncover deep results about the digit distribution of Euler’s number e, discuss the latest developments, share a 10x faster version of the code, and highlight new potential research areas in LLMs, AI, quantum dynamics, high-performance computing, cryptography, dynamical systems, number theory, and more, arising from my discovery. Perhaps the most interesting part is testing LLMs and other AI tools to assess their reasoning capabilities on a fascinating math problem with no solution posted anywhere.