DynamicVLA: A Vision-Language-Action Model for Dynamic Object Manipulation Paper • 2601.22153 • Published 3 days ago • 60
Running · The Eiffel Tower Llama 📝 · Explore the Eiffel Tower Llama experiment with open-source models · 105
Running on Zero · MCP · Featured · Z Image Turbo 🏃 · Generate realistic images from text descriptions · 1.68k
Running · Featured · Supertonic TTS WebGPU ⚡ · Blazingly fast text-to-speech, 100% locally in your browser · 101
Article · Transformers v5: Simple model definitions powering the AI ecosystem · Dec 1, 2025 • 288
Kandinsky 5.0: A Family of Foundation Models for Image and Video Generation Paper • 2511.14993 • Published Nov 19, 2025 • 230
Article · Introducing smolagents: simple agents that write actions in code · Dec 31, 2024 • 1.17k
Reply: To confirm my understanding: I upload the Parquet dataset here on the Hub (I do need to store it somewhere, and Parquet is optimized for the Hub) and then use the streaming feature while keeping a constant network connection, right?
Running · Featured · FineWeb: decanting the web for the finest text data at scale 🍷 · Generate high-quality text data for LLMs using FineWeb · 1.28k
Running on CPU Upgrade · Featured · The Smol Training Playbook 📚 · The secrets to building world-class LLMs · 2.95k
Quantum-PEFT: Ultra parameter-efficient fine-tuning Paper • 2503.05431 • Published Mar 7, 2025 • 1
VLA-Adapter: An Effective Paradigm for Tiny-Scale Vision-Language-Action Model Paper • 2509.09372 • Published Sep 11, 2025 • 246