Recent Activity

merve posted an update about 9 hours ago
Dataset Viewer for PDFs just landed on Hugging Face 📖🤗 you can now preview PDFs more easily than before!

on top of this, there's the PdfFolder format to load PDF datasets more quickly 💨
> to use it, your dataset should follow a directory layout like folder/train/doc1.pdf, folder/train/doc2.pdf
> if you want to include bounding boxes, labels etc., you can keep them in a metadata.csv file in the same folder 🤝
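here's a minimal sketch of loading such a dataset with 🤗 datasets (the "pdf" column name is my assumption, by analogy with imagefolder's "image" column):

```python
from datasets import load_dataset

# load a local PDF dataset with the PdfFolder builder; assumes a layout
# like folder/train/doc1.pdf, folder/train/doc2.pdf (+ optional metadata.csv)
dataset = load_dataset("pdffolder", data_dir="folder")

# each example should expose the decoded document under a "pdf" column
# (column name assumed, by analogy with the other folder builders)
print(dataset["train"][0])
```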

read the document dataset docs: https://huggingface.co/docs/datasets/main/en/document_dataset
check out all the document datasets here: https://huggingface.co/datasets?modality=modality:document&sort=trending 📖
freddyaboulton posted an update 1 day ago
albertvillanova posted an update 2 days ago
🚀 SmolAgents v1.19.0 is live!
This release brings major improvements to agent flexibility, UI usability, streaming architecture, and developer experience, making it easier than ever to build smart, interactive AI agents. Here's what's new:

🔧 Agent Upgrades
- Support for managed agents in ToolCallingAgent
- Context manager support for cleaner agent lifecycle handling
- Output formatting now uses XML tags for consistency
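
A quick sketch of the new context manager support (a minimal example, assuming the usual smolagents setup; InferenceClientModel picks a default hosted model):

```python
from smolagents import ToolCallingAgent, InferenceClientModel

# the with-block now handles the agent's setup and cleanup for you
with ToolCallingAgent(tools=[], model=InferenceClientModel()) as agent:
    print(agent.run("What is the capital of France?"))
```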

🖥️ UI Enhancements
- GradioUI now supports reset_agent_memory: perfect for fresh starts in dev & demos.
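
For example (a sketch; everything except the new reset_agent_memory flag is the pre-existing GradioUI API):

```python
from smolagents import CodeAgent, GradioUI, InferenceClientModel

# reset_agent_memory=True wipes the agent's memory between runs,
# so every demo session starts from a clean slate
agent = CodeAgent(tools=[], model=InferenceClientModel())
GradioUI(agent, reset_agent_memory=True).launch()
```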

🔄 Streaming Refactor
- Streaming event aggregation moved off the Model class
- ➡️ Better architecture & maintainability

📦 Output Tracking
- CodeAgent outputs are now stored in ActionStep
- ✅ More visibility and structure to agent decisions
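
A sketch of inspecting those stored outputs after a run (the action_output attribute name is an assumption on my part):

```python
from smolagents import CodeAgent, InferenceClientModel
from smolagents.memory import ActionStep

agent = CodeAgent(tools=[], model=InferenceClientModel())
agent.run("Compute 3 ** 7")

# walk the agent's memory and print what each action step produced
for step in agent.memory.steps:
    if isinstance(step, ActionStep):
        print(step.action_output)  # attribute name assumed
```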

πŸ› Bug Fixes
- Smarter planning logic
- Cleaner Docker logs
- Better prompt formatting for additional_args
- Safer internal functions and final answer matching

📚 Docs Improvements
- Added quickstart examples with tool usage
- One-click Colab launch buttons
- Expanded reference docs (AgentMemory, GradioUI docstrings)
- Fixed broken links and migrated to .md format

🔗 Full release notes:
https://github.com/huggingface/smolagents/releases/tag/v1.19.0

💬 Try it out, explore the new features, and let us know what you build!

#smolagents #opensource #AIagents #LLM #HuggingFace
merve posted an update 2 days ago
we've merged the LightGlue keypoint matcher into Hugging Face transformers! it allows commercial use when paired with an open-source keypoint detector 🙏🏻

it works very well, try it yourself: ETH-CVG/LightGlue

here's an in-the-wild test with two images of the same place ⬇️
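
here's a sketch of running it in transformers; the checkpoint name (ETH-CVG/lightglue_superpoint) and the SuperGlue-style post-processing call are my assumptions:

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModel

ckpt = "ETH-CVG/lightglue_superpoint"  # checkpoint name assumed
processor = AutoImageProcessor.from_pretrained(ckpt)
model = AutoModel.from_pretrained(ckpt)

image1 = Image.open("place_view_1.jpg")  # two photos of the same place
image2 = Image.open("place_view_2.jpg")

inputs = processor([image1, image2], return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# recover matched keypoint pairs at the original image resolutions
sizes = [[(image1.height, image1.width), (image2.height, image2.width)]]
matches = processor.post_process_keypoint_matching(outputs, target_sizes=sizes, threshold=0.2)
print(matches[0]["keypoints0"], matches[0]["keypoints1"], matches[0]["matching_scores"])
```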
merve posted an update 3 days ago
Release picks of the past week are here! Find more models, datasets, and Spaces here: merve/june-20-releases-68594824d1f4dfa61aee3433

πŸ–ΌοΈ VLMs/OCR
> moonshotai/Kimi-VL-A3B-Thinking-2506 is a powerful reasoning vision LM, 3B active params, smarter with less tokens, supports long documents, videos πŸ‘ (OS)
> nanonets/Nanonets-OCR-s is 3.75B params OCR model based on Qwen2.5VL-3B-Instruct (OS)

💬 LLMs
> moonshotai/Kimi-Dev-72B is a strong coding model based on Qwen2.5-72B (OS)
> Mistral released mistralai/Mistral-Small-3.2-24B-Instruct-2506, an update to their previous model with better function calling & instruction following (OS)

πŸ—£οΈ Audio
> Google released google/magenta-realtime, real time music generation & audio synthesis (cc-by-4)
> kyutai released new speech-to-text models that come in 1B & 2B ( kyutai/stt-1b-en_fr, stt-2b-en_fr) with 0.5s and 2.5s delay

3D
> Tencent released tencent/Hunyuan3D-2.1, an image-to-3D model (see below)
merve posted an update 4 days ago
merve posted an update 6 days ago
merve posted an update 7 days ago
stop using VLMs blindly ✋🏻

compare different VLM outputs on a huge variety of inputs (from reasoning to OCR!) 🔥 visionLMsftw/comparevlms

> has support for multiple VLMs: google/gemma-3-27b-it, Qwen/Qwen2.5-VL-7B-Instruct, Qwen/Qwen2.5-VL-32B-Instruct, meta-llama/Llama-4-Maverick-17B-128E-Instruct, HuggingFaceTB/SmolVLM2-2.2B-Instruct
> recommend new models or inputs to us, we'll add them 🫡

so far I've figured out:
> for fact-checks, you need a relatively bigger model (7B is OK!)
> Gemma 3 gets downgraded without pan-and-scan (especially for 📑)
> Qwen2.5VL-32B is very talkative, great for reasoning but not good for simple tasks 🗣️
merve posted an update 8 days ago
Releases of the past week are here merve/releases-june-13-6852c3c1eaf1e0c24c958860

Here are our picks 🤓
So many interesting models were released in open AI this past week! 🤖

πŸ–ΌοΈ Computer Vision/VLMs
> nanonets/Nanonets-OCR-s is the new state-of-the-art OCR model that can handle checkboxes, watermarks, tables (OS)
> Meta released facebook/v-jepa-2-6841bad8413014e185b497a6, new sota video embeddings with two new classification models (OS)
> ByteDance-Seed/SeedVR2-3B is a new 3B video restoration model (OS)

Audio
> Stepfun released stepfun-ai/Step-Audio-AQAA, a new large (137B 🤯) audio language model that takes in audio and generates audio (OS)

🤖 Robotics
> NVIDIA released nvidia/GR00T-N1.5-3B, a new open foundation vision-language-action model

3D
> tencent/Hunyuan3D-2.1 is the new version of Hunyuan by Tencent that can generate 3D assets from text and image prompts
merve posted an update 9 days ago
IN: video fine-tuning support for facebook V-JEPA 2 in HF transformers 🔥

it comes with:
> four models fine-tuned on the Diving48 and SSv2 datasets: facebook/v-jepa-2-6841bad8413014e185b497a6
> a FastRTC demo of V-JEPA 2 on SSv2: qubvel-hf/vjepa2-streaming-video-classification
> a fine-tuning script for UCF-101: https://gist.github.com/ariG23498/28bccc737c11d1692f6d0ad2a0d7cddb
> a fine-tuning notebook for UCF-101: https://colab.research.google.com/drive/16NWUReXTJBRhsN3umqznX4yoZt2I7VGc?usp=sharing
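for a quick taste, here's a sketch of running one of the fine-tuned checkpoints (checkpoint name and input shape are assumptions, grab a real one from the collection above):

```python
import torch
from transformers import AutoVideoProcessor, AutoModelForVideoClassification

ckpt = "facebook/vjepa2-vitl-fpc16-256-ssv2"  # checkpoint name assumed
processor = AutoVideoProcessor.from_pretrained(ckpt)
model = AutoModelForVideoClassification.from_pretrained(ckpt)

# dummy clip: 16 frames of 256x256 RGB; swap in real decoded video frames
video = torch.randint(0, 256, (16, 3, 256, 256), dtype=torch.uint8)
inputs = processor(video, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```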
we're looking forward to seeing what you build! 🤗
merve posted an update 10 days ago
#CVPR2025 Paper Picks #1
VisionZip is a compression technique that reduces the number of visual tokens to improve both performance AND prefill time for vision language models
demo: Senqiao/VisionZip
paper: VisionZip: Longer is Better but Not Necessary in Vision Language Models (2412.04467)
most of the image tokens are redundant for the LLM, so the authors ask "are all visual tokens necessary?"

the method is simple:
keep the tokens with the highest attention scores, merge the rest of the tokens based on similarity, then combine both sets
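a toy sketch of that idea (illustrative only, not the authors' implementation; the token budgets are made up):

```python
import torch
import torch.nn.functional as F

def reduce_visual_tokens(tokens, attn_scores, n_dominant=54, n_contextual=10):
    # tokens: (seq, dim); attn_scores: (seq,) attention received by each token
    keep = attn_scores.topk(n_dominant).indices
    rest_mask = torch.ones(tokens.size(0), dtype=torch.bool)
    rest_mask[keep] = False
    dominant, rest = tokens[keep], tokens[rest_mask]

    # merge the remaining tokens into a few contextual tokens by similarity
    targets = rest[:n_contextual]
    sim = F.normalize(rest, dim=-1) @ F.normalize(targets, dim=-1).T
    assign = sim.argmax(dim=-1)
    merged = torch.stack([
        rest[assign == i].mean(dim=0) if (assign == i).any() else targets[i]
        for i in range(n_contextual)
    ])
    return torch.cat([dominant, merged], dim=0)

reduced = reduce_visual_tokens(torch.randn(576, 1024), torch.rand(576))
print(reduced.shape)  # torch.Size([64, 1024]) instead of 576 tokens
```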

their method works both training-free and with fine-tuning
the authors report a 5-point average improvement on vision language tasks + an 8x improvement in prefill time for Llava-Next 7B and 13B 🤯

removing redundant tokens improves image token quality too 🥹
merve posted an update 10 days ago
stop writing CUDA kernels yourself

we have launched Kernel Hub: easy optimized kernels for all models on Hugging Face 🔥 use them right away!
it's where the community publishes optimized kernels 🤝

this release comes in three parts
> Kernel Hub: contains (as of now) 14 kernels
> kernels: Python library to load kernels from Kernel Hub
> kernel-builder: Nix package to build kernels for PyTorch (made using PyTorch C++ frontend)

when building models, your regular workflow should be pulling kernels from the Hub and building your model with them 🤗
here's a practical example with RMSNorm:
1. pull the kernel from the Hub with get_kernel
2. decorate with use_kernel_forward_from_hub
3. inject it into your model
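in code, that could look like this (a sketch; the eager forward below is a plain RMSNorm fallback so the layer still runs when no optimized kernel is available):

```python
import torch
from kernels import use_kernel_forward_from_hub

# the decorator marks this layer so its forward can be swapped
# for the optimized "RMSNorm" kernel from the Hub
@use_kernel_forward_from_hub("RMSNorm")
class RMSNorm(torch.nn.Module):
    def __init__(self, hidden_size: int, eps: float = 1e-6):
        super().__init__()
        self.weight = torch.nn.Parameter(torch.ones(hidden_size))
        self.variance_epsilon = eps

    def forward(self, hidden_states):
        # eager fallback: standard RMS normalization
        variance = hidden_states.pow(2).mean(-1, keepdim=True)
        hidden_states = hidden_states * torch.rsqrt(variance + self.variance_epsilon)
        return self.weight * hidden_states

norm = RMSNorm(4096)
print(norm(torch.randn(2, 16, 4096)).shape)
```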
we'd love to hear your feedback! 🙏🏻
we also welcome kernel contributions from the community 🥹💗

- request kernels here: kernels-community/README#1
- check out this org: kernels-community
- read the blog: https://huggingface.co/blog/hello-hf-kernels
merve posted an update 13 days ago
Dolphin: new OCR model by ByteDance with MIT license 🐬

the model first detects elements in the layout (tables, formulas, etc.) and then parses each element in parallel for generation
Model: ByteDance/Dolphin
Try the demo: ByteDance/Dolphin
merve posted an update 15 days ago
stop building parser pipelines 👋🏻
there's a new document parser that is small, fast, Apache 2.0 licensed, and better than all the others! 😱

echo840/MonkeyOCR is a 3B model that can parse everything (charts, formulas, tables, etc.) in a document 🤠
> the authors show in the paper that document parsing pipelines often suffer from errors propagating through their stages
> single end-to-end models do better, but they're too heavy to use

this model addresses both: it's lighter, faster, stronger 🔥
merve posted an update 15 days ago
Meta just released V-JEPA 2: new open-source image/video world models ⏯️🤗 facebook/v-jepa-2-6841bad8413014e185b497a6

> based on ViT, different sizes (L/G/H) and resolutions (286/384)
> 0-day support in 🤗 transformers
> comes with physical reasoning (from video) benchmarks: MVPBench, IntPhys 2, and CausalVQA facebook/physical_reasoning_leaderboard

Read more https://ai.meta.com/blog/v-jepa-2-world-model-benchmarks/
We will release a fine-tuning notebook with task-specific models in transformers format soon, stay tuned!
freddyaboulton posted an update 17 days ago
Time is running out! ⏰

Less than 24 hours to participate in the MCP Hackathon and win thousands of dollars in prizes! Don't miss this opportunity to showcase your skills.

Visit Agents-MCP-Hackathon/AI-Marketing-Content-Creator to register!

freddyaboulton posted an update 17 days ago
🚨 NotebookLM Dethroned?! 🚨

Meet Fluxions vui: The new open-source dialogue generation model.
🤯 100M Params, 40k hours audio!
🎙️ Multi-speaker audio
😂 Non-speech sounds (like [laughs]!)
📜 MIT License

Is this the future of content creation? Watch the video and decide for yourself!

https://huggingface.co/spaces/fluxions/vui-space
https://huggingface.co/fluxions/vui
merve posted an update 21 days ago