Tags: Image-Text-to-Text · Transformers · ONNX · Safetensors · English · idefics3 · conversational

Fix pipeline tag and add link to paper

#30
by nielsr (HF Staff)
Files changed (1)
  1. README.md +10 -6
README.md CHANGED
````diff
@@ -1,21 +1,24 @@
 ---
-library_name: transformers
-license: apache-2.0
+base_model:
+- HuggingFaceTB/SmolLM2-1.7B-Instruct
+- google/siglip-so400m-patch14-384
 datasets:
 - HuggingFaceM4/the_cauldron
 - HuggingFaceM4/Docmatix
-pipeline_tag: image-text-to-text
 language:
 - en
-base_model:
-- HuggingFaceTB/SmolLM2-1.7B-Instruct
-- google/siglip-so400m-patch14-384
+library_name: transformers
+license: apache-2.0
+pipeline_tag: image-text-to-text
 ---
 
+```markdown
 <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/SmolVLM.png" width="800" height="auto" alt="Image description">
 
 # SmolVLM
 
+This repository contains the model for [SmolVLM: Redefining small and efficient multimodal models](https://huggingface.co/papers/2504.05299).
+
 SmolVLM is a compact open multimodal model that accepts arbitrary sequences of image and text inputs to produce text outputs. Designed for efficiency, SmolVLM can answer questions about images, describe visual content, create stories grounded on multiple images, or function as a pure language model without visual inputs. Its lightweight architecture makes it suitable for on-device applications while maintaining strong performance on multimodal tasks.
 
 ## Model Summary
@@ -188,4 +191,5 @@ You can cite us in the following way:
 journal={arXiv preprint arXiv:2504.05299},
 year={2025}
 }
+```
 ```
````
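
For reference, the `pipeline_tag` in the YAML front matter is what routes the model to the matching `transformers` pipeline on the Hub. A minimal sketch of loading the model that way, assuming the repo id is `HuggingFaceTB/SmolVLM-Instruct` (not stated on this page) and a recent `transformers` release that ships the `image-text-to-text` pipeline:

```python
# Minimal sketch: with pipeline_tag fixed, the model can be loaded through
# the generic pipeline API. The repo id below is an assumption.
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="HuggingFaceTB/SmolVLM-Instruct")

# Chat-style input: an image (by URL) plus a text prompt in one user turn.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/SmolVLM.png"},
            {"type": "text", "text": "Describe this image briefly."},
        ],
    }
]

out = pipe(text=messages, max_new_tokens=64)
print(out[0]["generated_text"])
```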