SmolVLM2: Bringing Video Understanding to Every Device
TL;DR: SmolVLM can now watch 📺 with even better visual understanding
SmolVLM2 represents a fundamental shift in how we think about video understanding - moving from massive models that require substantial computing resources to efficient models that can run anywhere. Our goal is simple: make video understanding accessible across all devices and use cases, from phones to servers.
We are releasing models in three sizes (2.2B, 500M, and 256M), all MLX-ready (with Python and Swift APIs) from day zero. We've made all models and demos available in this collection.
Want to try SmolVLM2 right away? Check out our interactive chat interface where you can test visual and video understanding capabilities of SmolVLM2 2.2B through a simple, intuitive interface.
Technical Details
We are introducing three new models with 256M, 500M and 2.2B parameters. The 2.2B model is the go-to choice for vision and video tasks, while the 500M and 256M models represent the smallest video language models ever released.
Despite their small size, they outperform existing models relative to their memory consumption. On Video-MME (the go-to scientific benchmark for video), SmolVLM2 joins frontier model families in the 2B range, and we lead the pack in the even smaller space.

Video-MME stands out as a comprehensive benchmark due to its extensive coverage across diverse video types, varying durations (11 seconds to 1 hour), multiple data modalities (including subtitles and audio), and high-quality expert annotations spanning 900 videos totaling 254 hours. Learn more here.
SmolVLM2 2.2B: Our New Star Player for Vision and Video
Compared with the previous SmolVLM family, our new 2.2B model got better at solving math problems with images, reading text in photos, understanding complex diagrams, and tackling scientific visual questions. This shows in the model performance across different benchmarks:

When it comes to video tasks, the 2.2B model offers a lot of bang for the buck. Across the scientific benchmarks we evaluated it on, we want to highlight its performance on Video-MME, where it outperforms all existing 2B models.
We were able to achieve a good balance between video and image performance thanks to the data mixture learnings published in Apollo: An Exploration of Video Understanding in Large Multimodal Models.
It’s so memory efficient that you can run it even in a free Google Colab.
Python Code
# Make sure we are running the latest version of Transformers
!pip install git+https://github.com/huggingface/transformers.git

import torch
from transformers import AutoProcessor, AutoModelForImageTextToText

model_path = "HuggingFaceTB/SmolVLM2-2.2B-Instruct"
processor = AutoProcessor.from_pretrained(model_path)
model = AutoModelForImageTextToText.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,
    _attn_implementation="flash_attention_2"
).to("cuda")

# Build a chat with a video and a text prompt
messages = [
    {
        "role": "user",
        "content": [
            {"type": "video", "path": "path_to_video.mp4"},
            {"type": "text", "text": "Describe this video in detail"}
        ]
    },
]

# The chat template prepares the video frames and text inputs automatically
inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

generated_ids = model.generate(**inputs, do_sample=False, max_new_tokens=64)
generated_texts = processor.batch_decode(
    generated_ids,
    skip_special_tokens=True,
)
print(generated_texts[0])
Going Even Smaller: Meet the 500M and 256M Video Models
Nobody dared to release such small video models until today.
Our new SmolVLM2-500M-Video-Instruct model has video capabilities very close to SmolVLM 2.2B, but at a fraction of the size: we're getting the same video understanding capabilities with less than a quarter of the parameters 🤯.
And then there's our little experiment, the SmolVLM2-256M-Video-Instruct. Think of it as our "what if" project - what if we could push the boundaries of small models even further? Taking inspiration from what IBM achieved with our base SmolVLM-256M-Instruct a few weeks ago, we wanted to see how far we could go with video understanding. While it's more of an experimental release, we're hoping it'll inspire some creative applications and specialized fine-tuning projects.
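The smaller checkpoints are drop-in replacements for the 2.2B model in the code shown above: the same Transformers API applies, and only the model id changes. Here is a minimal loading sketch, which assumes the HuggingFaceTB/SmolVLM2-500M-Video-Instruct repository id based on the model name used in this post:

# Minimal sketch: only the model id changes compared to the 2.2B example above.
# The repository id below is an assumption based on the model name used in this post.
import torch
from transformers import AutoProcessor, AutoModelForImageTextToText

model_path = "HuggingFaceTB/SmolVLM2-500M-Video-Instruct"  # or the 256M video variant
processor = AutoProcessor.from_pretrained(model_path)
model = AutoModelForImageTextToText.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,
).to("cuda")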
Suite of SmolVLM2 Demo applications
To demonstrate our vision for small video models, we've built three practical applications that showcase their versatility.
iPhone Video Understanding
We've created an iPhone app running SmolVLM2 completely locally. Using our 500M model, users can analyze and understand video content directly on their device - no cloud required. Interested in building iPhone video processing apps with AI models running locally? We're releasing it very soon - fill this form to test and build with us!
VLC media player integration
Working in collaboration with VLC media player, we're integrating SmolVLM2 to provide intelligent video segment descriptions and navigation. This integration allows users to search through video content semantically, jumping directly to relevant sections based on natural language descriptions. While this is work in progress, you can experiment with the current playlist builder prototype in this space.
Video Highlight Generator
Available as a Hugging Face Space, this application takes long-form videos (1+ hours) and automatically extracts the most significant moments. We've tested it extensively with soccer matches and other lengthy events, making it a powerful tool for content summarization. Try it yourself in our demo space.
Using SmolVLM2 with Transformers and MLX
We make SmolVLM2 available to use with transformers and MLX from day zero. In this section, you can find different inference alternatives and tutorials for video and multiple images.
Transformers
The easiest way to run inference with the SmolVLM2 models is through the conversational API – applying the chat template takes care of preparing all inputs automatically.
You can load the model as follows.
# Make sure we are running the latest version of Transformers
!pip install git+https://github.com/huggingface/transformers.git

import torch
from transformers import AutoProcessor, AutoModelForImageTextToText

model_path = "HuggingFaceTB/SmolVLM2-2.2B-Instruct"
DEVICE = "cuda" if torch.cuda.is_available() else "cpu"

processor = AutoProcessor.from_pretrained(model_path)
model = AutoModelForImageTextToText.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,
    _attn_implementation="flash_attention_2"
).to(DEVICE)
Video Inference
You can pass videos through the chat template by including {"type": "video", "path": video_path} in the message content. See below for a complete example.
import torch

messages = [
    {
        "role": "user",
        "content": [
            {"type": "video", "path": "path_to_video.mp4"},
            {"type": "text", "text": "Describe this video in detail"}
        ]
    },
]

inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

generated_ids = model.generate(**inputs, do_sample=False, max_new_tokens=64)
generated_texts = processor.batch_decode(
    generated_ids,
    skip_special_tokens=True,
)
print(generated_texts[0])
Multiple Image Inference
In addition to video, SmolVLM2 supports multi-image conversations. You can use the same API through the chat template.
import torch

messages = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "What are the differences between these two images?"},
            {"type": "image", "path": "image_1.png"},
            {"type": "image", "path": "image_2.png"}
        ]
    },
]

inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

generated_ids = model.generate(**inputs, do_sample=False, max_new_tokens=64)
generated_texts = processor.batch_decode(
    generated_ids,
    skip_special_tokens=True,
)
print(generated_texts[0])
Inference with MLX
To run SmolVLM2 with MLX on Apple Silicon devices using Python, you can use the excellent mlx-vlm library.
First, you need to install mlx-vlm from this branch using the following command:
pip install git+https://github.com/pcuenca/mlx-vlm.git@smolvlm
Then you can run inference on a single image using the following one-liner, which uses the unquantized 500M version of SmolVLM2:
python -m mlx_vlm.generate \
--model mlx-community/SmolVLM2-500M-Video-Instruct-mlx \
--image https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/bee.jpg \
--prompt "Can you describe this image?"
We also created a simple script for video understanding. You can use it as follows:
python -m mlx_vlm.smolvlm_video_generate \
--model mlx-community/SmolVLM2-500M-Video-Instruct-mlx \
--system "Focus only on describing the key dramatic action or notable event occurring in this video segment. Skip general context or scene-setting details unless they are crucial to understanding the main action." \
--prompt "What is happening in this video?" \
--video /Users/pedro/Downloads/IMG_2855.mov
Note that the system prompt is important for steering the model toward the desired behaviour. You can use it to, for example, describe all scenes and transitions, or to provide a one-sentence summary of what's going on.
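As an illustration, the same command with a summary-oriented system prompt might look like the sketch below; the system prompt wording and the video path are placeholders we made up, not values from the release.

python -m mlx_vlm.smolvlm_video_generate \
--model mlx-community/SmolVLM2-500M-Video-Instruct-mlx \
--system "Provide a one-sentence summary of what happens in this video segment." \
--prompt "What is happening in this video?" \
--video path_to_video.mov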
Swift MLX
The Swift language is also supported through the mlx-swift-examples repo, which is what we used to build our iPhone app.
Until our in-progress PR is finalized and merged, you have to compile the project from this fork, and then you can use the llm-tool CLI on your Mac as follows.
For image inference:
./mlx-run --debug llm-tool \
--model mlx-community/SmolVLM2-500M-Video-Instruct-mlx \
--prompt "Can you describe this image?" \
--image https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/bee.jpg \
--temperature 0.7 --top-p 0.9 --max-tokens 100
Video analysis is also supported, as is providing a system prompt. We found system prompts particularly helpful for video understanding, as they let us drive the model to the level of detail we are interested in. This is a video inference example:
./mlx-run --debug llm-tool \
--model mlx-community/SmolVLM2-500M-Video-Instruct-mlx \
--system "Focus only on describing the key dramatic action or notable event occurring in this video segment. Skip general context or scene-setting details unless they are crucial to understanding the main action." \
--prompt "What is happening in this video?" \
--video /Users/pedro/Downloads/IMG_2855.mov \
--temperature 0.7 --top-p 0.9 --max-tokens 100
If you integrate SmolVLM2 in your apps using MLX and Swift, we'd love to know about it! Please feel free to drop us a note in the comments section below!
Fine-tuning SmolVLM2
You can fine-tune SmolVLM2 on videos using transformers 🤗 We have fine-tuned the 500M variant in Colab on video-caption pairs from the VideoFeedback dataset for demonstration purposes. Since the 500M variant is small, it's better to apply full fine-tuning instead of QLoRA or LoRA, while you can try applying QLoRA to the 2.2B variant. You can find the fine-tuning notebook here.
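As a rough illustration of the full fine-tuning setup described above, here is a minimal sketch using the standard Trainer API. The checkpoint id, hyperparameters, and the train_dataset / collate_fn objects are illustrative assumptions; the linked notebook is the authoritative reference.

# Minimal sketch of full fine-tuning for the 500M variant (not the exact notebook recipe).
# The checkpoint id, hyperparameters, and train_dataset / collate_fn are illustrative assumptions.
import torch
from transformers import (
    AutoProcessor,
    AutoModelForImageTextToText,
    TrainingArguments,
    Trainer,
)

model_path = "HuggingFaceTB/SmolVLM2-500M-Video-Instruct"
processor = AutoProcessor.from_pretrained(model_path)
model = AutoModelForImageTextToText.from_pretrained(model_path, torch_dtype=torch.bfloat16)

training_args = TrainingArguments(
    output_dir="smolvlm2-500m-videofeedback",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    learning_rate=1e-5,
    num_train_epochs=1,
    bf16=True,
    logging_steps=10,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,  # hypothetical: video-caption pairs from VideoFeedback
    data_collator=collate_fn,     # hypothetical: applies the chat template via `processor`
)
trainer.train()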
Read More
We would like to thank Raushan Turganbay, Arthur Zucker and Pablo Montalvo Leroux for their contribution of the model to transformers.
We are looking forward to seeing all the things you'll build with SmolVLM2! If you'd like to learn more about the SmolVLM family of models, feel free to read the following: