gv-hf

company
Activity Feed

AI & ML interests

None defined yet.

Recent Activity

gv-hf's activity

merve posted an update about 19 hours ago
Just in: video fine-tuning support for facebook V-JEPA 2 in HF transformers 🔥

it comes with
> four models fine-tuned on the Diving48 and SSv2 datasets facebook/v-jepa-2-6841bad8413014e185b497a6
> a FastRTC demo of V-JEPA 2 on SSv2 qubvel-hf/vjepa2-streaming-video-classification
> a fine-tuning script on UCF-101 https://gist.github.com/ariG23498/28bccc737c11d1692f6d0ad2a0d7cddb
> a fine-tuning notebook on UCF-101 https://colab.research.google.com/drive/16NWUReXTJBRhsN3umqznX4yoZt2I7VGc?usp=sharing
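here's a minimal sketch of running one of the fine-tuned checkpoints for video classification; the checkpoint id below is a placeholder, so grab a real one from the collection above:

```python
# minimal sketch, assuming a SSv2 fine-tuned checkpoint from the collection
# above -- the exact id here is a placeholder, not guaranteed
import torch
from transformers import AutoVideoProcessor, AutoModelForVideoClassification

ckpt = "facebook/vjepa2-vitl-fpc16-256-ssv2"  # placeholder id, check the collection
processor = AutoVideoProcessor.from_pretrained(ckpt)
model = AutoModelForVideoClassification.from_pretrained(ckpt)

# dummy clip: 16 frames of 256x256 RGB; swap in real frames from your video decoder
video = torch.randint(0, 256, (16, 3, 256, 256), dtype=torch.uint8)
inputs = processor(video, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```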
we're looking forward to seeing what you'll build! 🤗
merve posted an update 2 days ago
#CVPR2025 Paper Picks #1
VisionZip is a compression technique that reduces the number of visual tokens to improve both performance AND prefill time for vision-language models
demo: Senqiao/VisionZip
paper: VisionZip: Longer is Better but Not Necessary in Vision Language Models (2412.04467)
most image tokens are redundant for the LLM, so the authors ask "are all visual tokens necessary?"

the method is simple:
find the tokens with the highest attention scores, merge the remaining tokens based on similarity, then concatenate both sets (toy sketch below)
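a toy PyTorch illustration of that idea (mine, not the authors' code; the token counts and anchor choice are made up for the example):

```python
# toy illustration of the VisionZip idea, not the authors' implementation:
# keep the most-attended visual tokens, merge the rest by similarity
import torch
import torch.nn.functional as F

def prune_and_merge(tokens, attn, n_keep=64, n_merged=16):
    # tokens: (N, D) visual tokens; attn: (N,) attention each token received
    keep = attn.topk(n_keep).indices
    mask = torch.ones(tokens.size(0), dtype=torch.bool)
    mask[keep] = False
    rest = tokens[mask]                # the redundant tokens
    anchors = rest[:n_merged]          # crude choice of merge targets
    sim = F.normalize(rest, dim=-1) @ F.normalize(anchors, dim=-1).T
    assign = sim.argmax(-1)            # nearest anchor per leftover token
    merged = torch.stack([
        rest[assign == j].mean(0) if (assign == j).any() else anchors[j]
        for j in range(n_merged)
    ])
    return torch.cat([tokens[keep], merged])  # (n_keep + n_merged, D)

out = prune_and_merge(torch.randn(576, 1024), torch.rand(576))
print(out.shape)  # torch.Size([80, 1024])
```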

their method works both training-free and with fine-tuning
the authors report a 5-point average improvement across vision-language tasks + 8x faster prefilling for LLaVA-NeXT 7B and 13B 🤯

removing redundant tokens improves image token quality too 🥹
merve posted an update 2 days ago
stop writing CUDA kernels yourself

we have launched Kernel Hub: easy optimized kernels for all models on Hugging Face 🔥 use them right away!
it's where the community publishes optimized kernels 🤝

this release comes in three parts
> Kernel Hub: contains (as of now) 14 kernels
> kernels: Python library to load kernels from Kernel Hub
> kernel-builder: Nix package to build kernels for PyTorch (made using the PyTorch C++ frontend)

when building models, your regular workflow should be pulling kernels from the Hub and building your model with them 🤗
here's a practical example with RMSNorm (sketch below):
1. pull the kernel from the Hub with get_kernel
2. decorate your layer with use_kernel_forward_from_hub
3. inject it into your model
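a minimal sketch of those three steps; the registered layer name ("RMSNorm") and the plain-PyTorch fallback are my assumptions, so check the blog and kernels-community for the real mappings:

```python
# sketch of the workflow above, not canonical docs; "RMSNorm" as the
# registered layer name is an assumption -- see the blog for specifics
import torch
import torch.nn as nn
from kernels import get_kernel, use_kernel_forward_from_hub

# 1. pull a kernel from the Hub with get_kernel (downloads + loads it)
activation = get_kernel("kernels-community/activation")

# 2. decorate your layer so its forward can be swapped for a Hub kernel
@use_kernel_forward_from_hub("RMSNorm")
class RMSNorm(nn.Module):
    def __init__(self, hidden_size, eps=1e-6):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(hidden_size))
        self.eps = eps

    def forward(self, x):
        # plain PyTorch fallback that the optimized kernel replaces
        var = x.pow(2).mean(-1, keepdim=True)
        return self.weight * x * torch.rsqrt(var + self.eps)

# 3. inject it into your model like any other nn.Module
norm = RMSNorm(hidden_size=4096)
```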
we'd love to hear your feedback! 🙏🏻
we also welcome kernel contributions from the community 🥹💗

- request kernels here: kernels-community/README#1
- check out this org: kernels-community
- read the blog: https://huggingface.co/blog/hello-hf-kernels
merve posted an update 5 days ago
Dolphin: new OCR model by ByteDance with MIT license 🐬

the model first detects elements in the layout (tables, formulas, etc.) and then parses each element in parallel for generation
Model: ByteDance/Dolphin
Try the demo: ByteDance/Dolphin
merve posted an update 7 days ago
stop building parser pipelines 👋🏻
there's a new document parser that is small, fast, Apache 2.0 licensed, and better than all the others! 😱

echo840/MonkeyOCR is a 3B model that can parse everything (charts, formulas, tables, etc.) in a document 🤠
> the authors show in the paper that errors often propagate through document parsing pipelines
> single end-to-end models do better, but they're too heavy to use

this model addresses both: it's lighter, faster, stronger 🔥
merve posted an update 7 days ago
Meta just released V-JEPA 2: new open-source image/video world models ⏯️🤗 facebook/v-jepa-2-6841bad8413014e185b497a6

> based on ViT, in different sizes (L/G/H) and resolutions (286/384)
> day-0 support in 🤗 transformers
> comes with physical reasoning (from video) benchmarks: MVPBench, IntPhys 2, and CausalVQA facebook/physical_reasoning_leaderboard

Read more https://ai.meta.com/blog/v-jepa-2-world-model-benchmarks/
We will release a fine-tuning notebook with task-specific models in transformers format soon, stay tuned!
merve posted an update 13 days ago
Qwen2.5-Omni is soooo good that people are building multimodal reasoning models off of it 🥹
> KE-Team/Ke-Omni-R-3B is an open-source audio reasoning model, SOTA on benchmark averages, based on Qwen/Qwen2.5-Omni-3B 🗣️
> Haoz0206/Omni-R1 is a video reasoning model with pixel-level grounding, and it's super competitive ⏯️ based on Qwen/Qwen2.5-Omni-7B
Xenova posted an update 14 days ago
NEW: Real-time conversational AI models can now run 100% locally in your browser! 🤯

๐Ÿ” Privacy by design (no data leaves your device)
๐Ÿ’ฐ Completely free... forever
๐Ÿ“ฆ Zero installation required, just visit a website
โšก๏ธ Blazingly-fast WebGPU-accelerated inference

Try it out: webml-community/conversational-webgpu

For those interested, here's how it works:
- Silero VAD for voice activity detection
- Whisper for speech recognition
- SmolLM2-1.7B for text generation
- Kokoro for text-to-speech

Powered by Transformers.js and ONNX Runtime Web! 🤗 I hope you like it!
ariG23498 posted an update 14 days ago
🚨 Implement KV Cache from scratch in pure PyTorch. 🚨

We have documented all of our learnings while implementing KV Cache in nanoVLM. Joint work with @kashif @lusxvr @andito @pcuenq

Blog: hf.co/blog/kv-cache
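the core idea, in a bare-bones sketch (see the blog for the full nanoVLM implementation):

```python
# bare-bones sketch of the idea, not the nanoVLM code: cache each layer's
# K/V so a decode step only computes projections for the newest token
import torch

class KVCache:
    def __init__(self):
        self.k = None  # (B, n_heads, T_cached, head_dim)
        self.v = None

    def update(self, k_new, v_new):
        # append this step's keys/values along the time dimension
        if self.k is None:
            self.k, self.v = k_new, v_new
        else:
            self.k = torch.cat([self.k, k_new], dim=2)
            self.v = torch.cat([self.v, v_new], dim=2)
        return self.k, self.v

# decode loop: project only the new token, attend over everything cached
B, H, d = 1, 8, 64
cache = KVCache()
for step in range(3):
    q = torch.randn(B, H, 1, d)  # query for the newest token only
    k, v = cache.update(torch.randn(B, H, 1, d), torch.randn(B, H, 1, d))
    attn = torch.softmax(q @ k.transpose(-2, -1) / d**0.5, dim=-1) @ v
    print(step, attn.shape)  # attention over step+1 cached positions
```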
merve posted an update 14 days ago
Past week was insanely packed for open AI! 😱
Luckily we picked some highlights for you ❤️ lfg!

💬 LLMs/VLMs
> DeepSeek 🐳 released deepseek-ai/DeepSeek-R1-0528, a 685B model only 0.2 and 1.4 points behind o3 on AIME 24/25 🤯 they also released an 8B distilled version based on Qwen3 (OS) deepseek-ai/deepseek-r1-678e1e131c0169c0bc89728d
> Xiaomi released MiMo-7B-RL (LLM for code and math) and MiMo-VL-7B-RL (VLM for visual reasoning, GUI agentic tasks and general use) (OS) 😍 XiaomiMiMo/mimo-vl-68382ccacc7c2875500cd212
> NVIDIA released nvidia/Nemotron-Research-Reasoning-Qwen-1.5B, a new reasoning model
> DS: MiniMax released https://huggingface.co/MiniMaxAI/SynLogic, a new dataset of 49k logical reasoning examples across 35 tasks, including cipher solving, sudoku and more!

🖼️ Image/Video Generation
> Tencent released tencent/HunyuanPortrait, a new model for consistent portrait generation with an SVD Research license. they also released tencent/HunyuanVideo-Avatar, audio-driven avatar generation (OS)
> showlab released showlab/OmniConsistency, a consistent stylization model (OS)
> Rapidata/text-2-video-human-preferences-veo3 is a new T2V preference dataset based on videos from Veo3, with 46k examples (OS)

Audio 🗣️
> https://huggingface.co/ResembleAI/Chatterbox is a new 500M text-to-speech model preferred over ElevenLabs (OS) 😍
> PlayHT/PlayDiffusion is a new speech editing model (OS)

Other
> https://huggingface.co/NX-AI/TiReX is a new time series foundation model
> Yandex released a huge (4.79B examples!) video recommendation dataset https://huggingface.co/yandex/yambda

the OS ones have Apache 2.0 or MIT licenses; find more models and datasets here: merve/releases-30-may-6840097345e0b1e915bff843
merve posted an update 14 days ago
Yesterday was the day of vision-language-action models (VLAs)!

> SmolVLA: open-source small VLA for robotics by the Hugging Face LeRobot team 🤖
Blog: https://huggingface.co/blog/smolvla
Model: lerobot/smolvla_base

> Holo-1: 3B & 7B web/computer-use agentic VLAs by H Company 💻
Model family: Hcompany/holo1-683dd1eece7eb077b96d0cbd
Demo: https://huggingface.co/spaces/multimodalart/Holo1
Blog: https://huggingface.co/blog/Hcompany/holo1
super exciting times!!
merve posted an update 17 days ago
New GUI model by Salesforce AI & Uni HK: Jedi
tianbaoxiexxx/Jedi xlangai/Jedi-7B-1080p 🤗
Based on Qwen2.5-VL with Apache 2.0 license

prompt with the screenshot below → select "find more"
merve posted an update 19 days ago
HOT: MiMo-VL, new 7B vision LMs by Xiaomi surpassing GPT-4o (Mar), competitive in GUI agentic + reasoning tasks ❤️‍🔥 XiaomiMiMo/mimo-vl-68382ccacc7c2875500cd212

not only that, but they're also MIT licensed & usable with transformers 🔥
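a quick hedged sketch of trying it in transformers (MiMo-VL is Qwen2.5-VL based, so the image-text-to-text auto classes should apply; the checkpoint id and image URL below are placeholders):

```python
# hedged sketch, not official usage: the checkpoint id and image URL are
# placeholders -- pick a real checkpoint from the XiaomiMiMo collection
from transformers import AutoProcessor, AutoModelForImageTextToText

ckpt = "XiaomiMiMo/MiMo-VL-7B-RL"  # placeholder id, check the collection
processor = AutoProcessor.from_pretrained(ckpt)
model = AutoModelForImageTextToText.from_pretrained(ckpt, device_map="auto")

messages = [{"role": "user", "content": [
    {"type": "image", "url": "https://example.com/screenshot.png"},  # placeholder
    {"type": "text", "text": "What should I click to log in?"},
]}]
inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt",
).to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(processor.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```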
merve posted an update 20 days ago
introducing: VLM vibe eval 🪭 visionLMsftw/VLMVibeEval

vision LMs have saturated the benchmarks, so we built vibe eval 💬

> compare different models on refreshed in-the-wild examples across different categories 🤠
> submit your favorite model for eval
no numbers -- just vibes!
merve posted an update 22 days ago
emerging trend: models that can understand image + text and generate image + text

don't miss out ⤵️
> MMaDA: a single 8B diffusion model aligned with CoT (reasoning!) + UniGRPO Gen-Verse/MMaDA
> BAGEL: 7B MoT model based on Qwen2.5, SigLIP-so-400M and the Flux VAE ByteDance-Seed/BAGEL
both by ByteDance! 😱

I keep track of all any-input → any-output models here: https://huggingface.co/collections/merve/any-to-any-models-6822042ee8eb7fb5e38f9b62
merve posted an update 23 days ago
what happened in open AI this past week? so many vision LM & omni releases 🔥 merve/releases-23-may-68343cb970bbc359f9b5fb05

multimodal 💬🖼️
> the new moondream (VLM) is out: a 4-bit quantized (QAT) version of moondream-2b that runs on 2.5GB VRAM at 184 tps with only a 0.6% drop in accuracy (OS) 🌚
> ByteDance released BAGEL-7B, an omni model that understands and generates both image + text. they also released Dolphin, a document parsing VLM 🐬 (OS)
> Google DeepMind dropped MedGemma at I/O, a VLM that can interpret medical scans, and Gemma 3n, an omni model with competitive LLM performance
> MMaDA is a new 8B diffusion language model that can generate both image and text

LLMs
> Mistral released Devstral, a 24B coding assistant (OS) 👩🏻‍💻
> Fairy R1-32B is a new reasoning model -- a distilled version of DeepSeek-R1-Distill-Qwen-32B (OS)
> NVIDIA released AceReason-Nemotron-14B, a new 14B math and code reasoning model
> sarvam-m is a new Indic LM with a hybrid thinking mode, based on Mistral Small (OS)
> samhitika-0.0.1 is a new Sanskrit corpus (BookCorpus translated with Gemma3-27B)

image generation 🎨
> MTVCrafter is a new human motion animation generator