We are thrilled to present the improved "ClearerVoice-Studio", an open-source platform designed to make speech processing easy to use for everyone! Whether you're working on speech enhancement, speech separation, speech super-resolution, or target speaker extraction, this unified platform has you covered.
**Why Choose ClearerVoice-Studio?**
- Pre-Trained Models: Includes cutting-edge pre-trained models, fine-tuned on extensive, high-quality datasets. No need to start from scratch!
- Ease of Use: Designed for seamless integration with your projects, offering a simple yet flexible interface for inference and training (see the inference sketch after the list below).
- Enhance noisy speech recordings to achieve crystal-clear quality.
- Separate speech from complex audio mixtures with ease.
- Transform low-resolution audio into high-resolution audio. A fully upscaled LJSpeech-1.1-48kHz dataset can be downloaded from alibabasglab/LJSpeech-1.1-48kHz.
- Extract target speaker voices with precision using audio-visual models.
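To give a feel for the inference interface, here is a minimal sketch based on the repository's examples. The class, task, and model names (`ClearVoice`, `speech_enhancement`, `MossFormer2_SE_48K`) are assumptions drawn from the project README and may differ in the version you install:

```python
# Minimal inference sketch, assuming the ClearVoice wrapper from the
# ClearerVoice-Studio repo; task and model names may differ per release.
from clearvoice import ClearVoice

cv = ClearVoice(task='speech_enhancement', model_names=['MossFormer2_SE_48K'])

# Run enhancement on a noisy recording and write the cleaned audio out.
output_wav = cv(input_path='noisy_speech.wav', online_write=False)
cv.write(output_wav, output_path='enhanced_speech.wav')
```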
**Join Us in Growing ClearerVoice-Studio!**
We believe in the power of open-source collaboration. By starring our GitHub repository and sharing ClearerVoice-Studio with your network, you can help us grow this community-driven platform.
**Support us by:**
- Starring it on GitHub.
- Exploring and contributing to our codebase.
- Sharing your feedback and use cases to make the platform even better.
- Joining our community discussions to exchange ideas and innovations.

Together, let's push the boundaries of speech processing! Thank you for your support! :sparkling_heart:
🙋🏻♂️ Hey there folks, Open LLM Europe just released the Lucie 7B-Instruct model, a bilingual instruct model trained on open data! You can check out my unofficial demo here while we wait for the official inference API from the group: Tonic/Lucie-7B. Hope you like it 🚀
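If you'd rather poke at the model locally than use the demo, a minimal `transformers` sketch would look like the following. The repo id `OpenLLM-France/Lucie-7B-Instruct` is my assumption from the announcement, so double-check it on the Hub:

```python
# Minimal local-inference sketch; the repo id is an assumption, verify on the Hub.
from transformers import pipeline

pipe = pipeline("text-generation", model="OpenLLM-France/Lucie-7B-Instruct")

messages = [{"role": "user", "content": "Présente-toi en une phrase, puis en anglais."}]
out = pipe(messages, max_new_tokens=128)
print(out[0]["generated_text"])
```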
I'm excited to announce significant improvements to my HF Daily Paper Newsletter Bot! Here are the key updates:
🖼️ Enhanced Poster Generation
- Implemented dynamic height adjustment for daily paper posters (sketched below)
- Added support for displaying complete paper content without truncation
- Improved Chinese font rendering and text layout
- Integrated Hugging Face logo for better branding
- Enhanced visual aesthetics with optimized card layouts
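The dynamic height adjustment boils down to measuring the wrapped text before creating the canvas. Here is a hedged Pillow sketch; all sizes, wrap widths, and the default font are illustrative, not the bot's actual values:

```python
# Sketch of dynamic poster height with Pillow; numbers and font are illustrative.
import textwrap
from PIL import Image, ImageDraw, ImageFont

def render_poster(title: str, summary: str, width: int = 800) -> Image.Image:
    font = ImageFont.load_default()
    lines = textwrap.wrap(title, 40) + [""] + textwrap.wrap(summary, 60)
    line_height = 24
    margin = 40
    # Size the canvas to the content instead of truncating the text.
    height = margin * 2 + line_height * len(lines)
    img = Image.new("RGB", (width, height), "white")
    draw = ImageDraw.Draw(img)
    y = margin
    for line in lines:
        draw.text((margin, y), line, fill="black", font=font)
        y += line_height
    return img

render_poster("Paper title", "A long abstract that keeps going... " * 10).save("poster.png")
```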
📝 Content Improvements
- Removed paper count limitations (previously capped at 5 papers)
- Enhanced title and summary extraction algorithms
- Improved text wrapping and spacing for better readability
- Added proper handling of long content with automatic layout adjustments
🛠️ Technical Enhancements
- Implemented better font loading mechanism with fallback options (sketched after this list)
- Added support for multiple Chinese font paths
- Improved error handling and logging
- Enhanced memory management for image processing
- Added detailed debugging information
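The fallback font loading is essentially a try-each-path loop. A sketch along these lines; the candidate paths are common Linux locations, not necessarily the ones the bot actually uses:

```python
# Font loading with fallbacks; candidate paths are illustrative examples.
from PIL import ImageFont

CANDIDATE_FONTS = [
    "/usr/share/fonts/opentype/noto/NotoSansCJK-Regular.ttc",  # CJK coverage
    "/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf",
]

def load_font(size: int):
    for path in CANDIDATE_FONTS:
        try:
            return ImageFont.truetype(path, size)
        except OSError:
            continue  # try the next candidate
    return ImageFont.load_default()  # last-resort bitmap font
```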
🌟 Visual Design Updates
- Refined color scheme with HF brand colors
- Improved card spacing and padding
- Enhanced typography with better font sizing
- Added smooth transitions between paper cards
- Optimized overall layout for better visual hierarchy
🔧 Infrastructure Updates
- Improved GitHub Actions workflow reliability
- Enhanced error notification system
- Added automatic retries for API calls (sketched below)
- Improved logging and debugging capabilities
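The automatic retries are a standard exponential-backoff wrapper around the HTTP calls. A sketch, with an illustrative endpoint URL:

```python
# Retry-with-backoff sketch for flaky API calls; the URL below is illustrative.
import time
import requests

def fetch_json(url: str, attempts: int = 3, base_delay: float = 2.0):
    for i in range(attempts):
        try:
            resp = requests.get(url, timeout=10)
            resp.raise_for_status()
            return resp.json()
        except requests.RequestException:
            if i == attempts - 1:
                raise  # out of retries, surface the error
            time.sleep(base_delay * (2 ** i))  # 2s, 4s, 8s, ...

papers = fetch_json("https://huggingface.co/api/daily_papers")
```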
The bot now generates more professional and visually appealing daily paper summaries while ensuring complete content display. These updates make the newsletter more readable and informative for our users.
Try it out and let me know what you think! Your feedback helps me make continuous improvements to better serve the AI research community.
"Can it run DeepSeek V3 671B?" is the new "Can it run Doom?".
How minimalistic can I go with on-device AI and behemoth models? Here I'm running the DeepSeek V3 MoE on a single A6000 GPU.
Not great, not terrible for this minimalistic setup. I love Mixture of Experts architectures. Typically I run my core LLM distributed across 4 GPUs.
Make sure you own your AI. AI in the cloud is not aligned with you; it's aligned with the company that owns it.
We had a few people asking about the differences and methodologies of our addition to the open-image-preferences dataset. So my colleague and I wrote a blog post about it using the new Hugging Face blog functionality: https://huggingface.co/blog/RapidataAI/more-image-preferences
Major update on the Talking to Chatbots dataset! Expanded the 'wrapped' dataset (one row per chat) to 2.86k records, and the 'unwrapped' version (one row per conversation turn) to 11k records. The main source is my ChatGPT archive with nearly 2 years of chats. It is still a work in progress as I incorporate chats from other sources and qualitative metrics (SCBN) for responses.
🎯 Fine-tuning SmolLM2 on a lightweight synthetic reasoning dataset for reasoning-specific tasks. Future updates will focus on lightweight, blazing-fast reasoning models. Until then, check out the blog for fine-tuning details.
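For anyone who wants to try something similar, here is a minimal TRL fine-tuning sketch. The dataset repo id is hypothetical, since the post doesn't name the exact dataset; the SmolLM2 checkpoint name is the published one:

```python
# SFT sketch with TRL; the dataset repo id is hypothetical, swap in your own.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("my-org/lightweight-synthetic-reasoning", split="train")

trainer = SFTTrainer(
    model="HuggingFaceTB/SmolLM2-1.7B-Instruct",
    train_dataset=dataset,
    args=SFTConfig(output_dir="smollm2-reasoning-sft"),
)
trainer.train()
```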
📣 Looking for labeled, high-quality synthetic audio/TTS data 📣 Have you been or are you currently calling API endpoints from OpenAI, ElevenLabs, etc? Do you have labeled audio data sitting around gathering dust? Let's talk! Join https://discord.gg/QuGxSWBfQy or comment down below.
If your data exceeds quantity & quality thresholds and is approved into the next hexgrad/Kokoro-82M training mix, and you permissively DM me the data under an effective Apache license, then I will DM back the corresponding voicepacks for YOUR data if/when the next Apache-licensed Kokoro base model drops.
What does this mean? If you've been calling closed-source TTS or audio API endpoints to:
- Build voice agents
- Make long-form audio, like audiobooks or podcasts
- Handle customer support, etc.

Then YOU can contribute to the training mix and get useful artifacts in return. ❤️
All the responses get saved in the cfahlgren1/react-code-instructions dataset. Hopefully we can build one of the biggest, highest-quality frontend datasets on the Hub 💪
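Browsing the collected responses is one `datasets` call away; the split name `train` is an assumption:

```python
# Peek at the collected instructions; the split name is an assumption.
from datasets import load_dataset

ds = load_dataset("cfahlgren1/react-code-instructions", split="train")
print(ds)
print(ds[0])  # one request/response record
```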
Probably most of you already know this trick, but just in case: 🤔 Unable to connect to Hugging Face Spaces Dev Mode through local Cursor? 💡 Don't worry, there's an easy trick!
- Right-click "Connect with VS Code"
- Copy the link in your browser: `vscode://vscode-remote/...`
- Replace `vscode` with `cursor` and go: `cursor://vscode-remote/...`
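If you'd rather script the swap than edit the URL by hand, it's a one-line string replace; the link below is just the placeholder from the steps above:

```python
# Turn the copied VS Code deep link into a Cursor deep link.
link = "vscode://vscode-remote/..."  # paste your copied link here
print(link.replace("vscode://", "cursor://", 1))
```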