---
license: apache-2.0
language:
- en
base_model:
- openai/clip-vit-large-patch14-336
- Qwen/Qwen2-72B
pipeline_tag: image-text-to-text
tags:
- multimodal
- olmo
- molmo
- pixmo
---

<img src="molmo_logo.png" alt="Logo for the Molmo Project" style="width: auto; height: 50px;">

# Molmo 72B-D

Molmo is a family of open vision-language models developed by the Allen Institute for AI. Molmo models are trained on PixMo, a dataset of 1 million highly curated image-text pairs. Molmo achieves state-of-the-art performance among multimodal models of similar size while being fully open-source. You can find all models in the Molmo family [here](https://huggingface.co/collections/allenai/molmo-66f379e6fe3b8ef090a8ca19).
**Learn more** about the Molmo family [in our announcement blog post](https://molmo.allenai.org/blog).

Molmo 72B-D is based on [Qwen2-72B](https://huggingface.co/Qwen/Qwen2-72B) and uses [OpenAI CLIP](https://huggingface.co/openai/clip-vit-large-patch14-336) as its vision backbone.
On academic benchmarks it achieves the highest average score among the models in our evaluation, and on human preference it ranks just behind GPT-4o (see the Evaluations table below).
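
To see how these two backbones come together in the released checkpoint, you can inspect its configuration with `transformers`. The snippet below is a minimal sketch: it assumes the `allenai/Molmo-72B-0924` repo id used in the Quick Start below and that `transformers` is installed.

```python
from transformers import AutoConfig

# Print the checkpoint's configuration; trust_remote_code=True is needed because
# the Molmo modeling code ships with the repository. The printed config lists the
# model's architecture hyperparameters.
config = AutoConfig.from_pretrained(
    'allenai/Molmo-72B-0924',
    trust_remote_code=True,
)
print(config)
```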

This checkpoint is a **preview** of the Molmo release. All artifacts used in creating Molmo (PixMo dataset, training code, evaluations, intermediate checkpoints) will be made available at a later date, furthering our commitment to open-source AI development and reproducibility.

[**Sign up here**](https://docs.google.com/forms/d/e/1FAIpQLSdML1MhNNBDsCHpgWG65Oydg2SjZzVasyqlP08nBrWjZp_c7A/viewform) to be the first to know when artifacts are released.

## Quick Start

To run Molmo, first install dependencies (the example below also assumes `transformers`, `torch`, `Pillow`, and `requests` are already installed):

```bash
pip install einops tensorflow torchvision
```

Then, follow these steps:

```python
from transformers import AutoModelForCausalLM, AutoProcessor, GenerationConfig
from PIL import Image
import requests

# load the processor
processor = AutoProcessor.from_pretrained(
    'allenai/Molmo-72B-0924',
    trust_remote_code=True,
    torch_dtype='auto',
    device_map='auto'
)

# load the model
model = AutoModelForCausalLM.from_pretrained(
    'allenai/Molmo-72B-0924',
    trust_remote_code=True,
    torch_dtype='auto',
    device_map='auto'
)

# process the image and text
inputs = processor.process(
    images=[Image.open(requests.get("https://picsum.photos/id/237/536/354", stream=True).raw)],
    text="Describe this image."
)

# move inputs to the correct device and make a batch of size 1
inputs = {k: v.to(model.device).unsqueeze(0) for k, v in inputs.items()}

# generate output; maximum 200 new tokens; stop generation when <|endoftext|> is generated
output = model.generate_from_batch(
    inputs,
    GenerationConfig(max_new_tokens=200, stop_strings="<|endoftext|>"),
    tokenizer=processor.tokenizer
)

# only get generated tokens; decode them to text
generated_tokens = output[0, inputs['input_ids'].size(1):]
generated_text = processor.tokenizer.decode(generated_tokens, skip_special_tokens=True)

# print the generated text
print(generated_text)

# >>> This photograph captures an adorable black Labrador puppy sitting on a weathered
# wooden deck. The deck's planks, which are a mix of light and dark brown with ...
```
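
At 72B parameters, generation can be memory- and compute-heavy. As an optional follow-up (a sketch, not part of the official quick start), you can wrap the same `generate_from_batch` call in `torch.autocast` so most operations run in bfloat16; this assumes a CUDA GPU with bfloat16 support and reuses `model`, `inputs`, and `processor` from the example above.

```python
import torch
from transformers import GenerationConfig

# Optional: run generation under autocast so most ops execute in bfloat16.
# Assumes a CUDA device with bfloat16 support; reuses model, inputs, and
# processor from the Quick Start example above.
with torch.autocast(device_type="cuda", enabled=True, dtype=torch.bfloat16):
    output = model.generate_from_batch(
        inputs,
        GenerationConfig(max_new_tokens=200, stop_strings="<|endoftext|>"),
        tokenizer=processor.tokenizer
    )
```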

## Evaluations

| Model                       | Average Score on 11 Academic Benchmarks | Human Preference Elo Rating |
|-----------------------------|-----------------------------------------|-----------------------------|
| **Molmo 72B (this model)**  | **81.2**                                | **1077**                    |
| Molmo 7B-D                  | 77.3                                    | 1056                        |
| Molmo 7B-O                  | 74.6                                    | 1051                        |
| MolmoE 1B                   | 68.6                                    | 1032                        |
| GPT-4o                      | 78.5                                    | 1079                        |
| GPT-4V                      | 71.1                                    | 1041                        |
| Gemini 1.5 Pro              | 78.3                                    | 1074                        |
| Gemini 1.5 Flash            | 75.1                                    | 1054                        |
| Claude 3.5 Sonnet           | 76.7                                    | 1069                        |
| Claude 3 Opus               | 66.4                                    | 971                         |
| Claude 3 Haiku              | 65.3                                    | 999                         |
| Qwen VL2 72B                | 79.4                                    | 1037                        |
| Qwen VL2 7B                 | 73.7                                    | 1025                        |
| Intern VL2 LLAMA 76B        | 77.1                                    | 1018                        |
| Intern VL2 8B               | 69.4                                    | 953                         |
| Pixtral 12B                 | 69.5                                    | 1016                        |
| Phi3.5-Vision 4B            | 59.7                                    | 982                         |
| PaliGemma 3B                | 50.0                                    | 937                         |
| LLAVA OneVision 72B         | 76.6                                    | 1051                        |
| LLAVA OneVision 7B          | 72.0                                    | 1024                        |
| Cambrian-1 34B              | 66.8                                    | 953                         |
| Cambrian-1 8B               | 63.4                                    | 952                         |
| xGen-MM-Interleave 4B       | 59.5                                    | 979                         |
| LLAVA-1.5 13B               | 43.9                                    | 960                         |
| LLAVA-1.5 7B                | 40.7                                    | 951                         |

*Benchmarks: AI2D test, ChartQA test, VQA v2.0 test, DocQA test, InfographicVQA test, TextVQA val, RealWorldQA, MMMU val, MathVista testmini, CountBenchQA, Flickr Count (we collected this new dataset that is significantly harder than CountBenchQA).*

## License and Use

This model is licensed under Apache 2.0. It is intended for research and educational use.
For more information, please see our [Responsible Use Guidelines](https://allenai.org/responsible-use).