---
library_name: transformers
license: apache-2.0
base_model:
- allenai/Molmo-7B-D-0924
base_model_relation: quantized
tags:
- bitsandbytes
- Molmo
- chat
- multimodal
---
## 4-bit quantization of the original [Molmo-7B-D-0924](https://huggingface.co/allenai/Molmo-7B-D-0924) model using ```bitsandbytes```.
NOTE: This quantization differs from [the one here](https://huggingface.co/cyan2k/molmo-7B-D-bnb-4bit) in that the source code has been slightly modified to remove unnecessary dependencies and otherwise make the model work out of the box.
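For reference, the snippet below sketches one way a comparable 4-bit `bitsandbytes` quantization can be loaded or produced from the original checkpoint with `transformers`. It is a minimal illustration under assumed settings (NF4, double quantization, bfloat16 compute), not the exact recipe used to build this repository.

```python
# Minimal sketch (assumed settings, not this repo's exact build recipe):
# load the original Molmo checkpoint in 4-bit with bitsandbytes.
import torch
from transformers import AutoModelForCausalLM, AutoProcessor, BitsAndBytesConfig

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",            # assumed quantization type
    bnb_4bit_use_double_quant=True,       # assumed
    bnb_4bit_compute_dtype=torch.bfloat16 # assumed compute dtype
)

processor = AutoProcessor.from_pretrained(
    "allenai/Molmo-7B-D-0924",
    trust_remote_code=True
)
model = AutoModelForCausalLM.from_pretrained(
    "allenai/Molmo-7B-D-0924",
    trust_remote_code=True,
    quantization_config=quant_config,
    device_map="auto"
)
# model.save_pretrained("Molmo-7B-D-0924-bnb-4bit")  # optionally save the quantized weights
```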
<details><summary>Original Model Card</summary>
# Molmo 7B-D
Molmo is a family of open vision-language models developed by the Allen Institute for AI. Molmo models are trained on PixMo, a dataset of 1 million, highly-curated image-text pairs. It has state-of-the-art performance among multimodal models with a similar size while being fully open-source. You can find all models in the Molmo family [here](https://huggingface.co/collections/allenai/molmo-66f379e6fe3b8ef090a8ca19).
**Learn more** about the Molmo family [in our announcement blog post](https://molmo.allenai.org/blog) or the [paper](https://huggingface.co/papers/2409.17146).
Molmo 7B-D is based on [Qwen2-7B](https://huggingface.co/Qwen/Qwen2-7B) and uses [OpenAI CLIP](https://huggingface.co/openai/clip-vit-large-patch14-336) as vision backbone.
It performs comfortably between GPT-4V and GPT-4o on both academic benchmarks and human evaluation.
It powers the **Molmo demo at** [**molmo.allenai.org**](https://molmo.allenai.org).
This checkpoint is a **preview** of the Molmo release. All artifacts used in creating Molmo (PixMo dataset, training code, evaluations, intermediate checkpoints) will be made available at a later date, furthering our commitment to open-source AI development and reproducibility.
[**Sign up here**](https://docs.google.com/forms/d/e/1FAIpQLSdML1MhNNBDsCHpgWG65Oydg2SjZzVasyqlP08nBrWjZp_c7A/viewform) to be the first to know when artifacts are released.
Quick links:
- 💬 [Demo](https://molmo.allenai.org/)
- 📂 [All Models](https://huggingface.co/collections/allenai/molmo-66f379e6fe3b8ef090a8ca19)
- 📃 [Paper](https://molmo.allenai.org/paper.pdf)
- 🎥 [Blog with Videos](https://molmo.allenai.org/blog)
## Quick Start
To run Molmo, first install dependencies:
```bash
pip install einops torchvision
```
Then, follow these steps:
```python
from transformers import AutoModelForCausalLM, AutoProcessor, GenerationConfig
from PIL import Image
import requests

# load the processor
processor = AutoProcessor.from_pretrained(
    'allenai/Molmo-7B-D-0924',
    trust_remote_code=True,
    torch_dtype='auto',
    device_map='auto'
)

# load the model
model = AutoModelForCausalLM.from_pretrained(
    'allenai/Molmo-7B-D-0924',
    trust_remote_code=True,
    torch_dtype='auto',
    device_map='auto'
)

# process the image and text
inputs = processor.process(
    images=[Image.open(requests.get("https://picsum.photos/id/237/536/354", stream=True).raw)],
    text="Describe this image."
)

# move inputs to the correct device and make a batch of size 1
inputs = {k: v.to(model.device).unsqueeze(0) for k, v in inputs.items()}

# generate output; maximum 200 new tokens; stop generation when <|endoftext|> is generated
output = model.generate_from_batch(
    inputs,
    GenerationConfig(max_new_tokens=200, stop_strings="<|endoftext|>"),
    tokenizer=processor.tokenizer
)

# only get generated tokens; decode them to text
generated_tokens = output[0, inputs['input_ids'].size(1):]
generated_text = processor.tokenizer.decode(generated_tokens, skip_special_tokens=True)

# print the generated text
print(generated_text)
# >>> This image features an adorable black Labrador puppy, captured from a top-down
#     perspective. The puppy is sitting on a wooden deck, which is composed ...
```
To make inference more efficient, run with autocast:
```python
import torch

with torch.autocast(device_type="cuda", enabled=True, dtype=torch.bfloat16):
    output = model.generate_from_batch(
        inputs,
        GenerationConfig(max_new_tokens=200, stop_strings="<|endoftext|>"),
        tokenizer=processor.tokenizer
    )
```
We did most of our evaluation in this setting (autocast on, but float32 weights).
To further reduce the memory requirements, the model can be run with bfloat16 weights:
```python
model.to(dtype=torch.bfloat16)
inputs["images"] = inputs["images"].to(torch.bfloat16)
output = model.generate_from_batch(
    inputs,
    GenerationConfig(max_new_tokens=200, stop_strings="<|endoftext|>"),
    tokenizer=processor.tokenizer
)
```
Note that we have observed that this can change the output of the model compared to running with float32 weights.
## Evaluations
| Model | Average Score on 11 Academic Benchmarks | Human Preference Elo Rating |
|-----------------------------|-----------------------------------------|-----------------------------|
| Molmo 72B | 81.2 | 1077 |
| **Molmo 7B-D (this model)** | **77.3** | **1056** |
| Molmo 7B-O | 74.6 | 1051 |
| MolmoE 1B | 68.6 | 1032 |
| GPT-4o | 78.5 | 1079 |
| GPT-4V | 71.1 | 1041 |
| Gemini 1.5 Pro | 78.3 | 1074 |
| Gemini 1.5 Flash | 75.1 | 1054 |
| Claude 3.5 Sonnet | 76.7 | 1069 |
| Claude 3 Opus | 66.4 | 971 |
| Claude 3 Haiku | 65.3 | 999 |
| Qwen VL2 72B | 79.4 | 1037 |
| Qwen VL2 7B | 73.7 | 1025 |
| Intern VL2 LLAMA 76B | 77.1 | 1018 |
| Intern VL2 8B | 69.4 | 953 |
| Pixtral 12B | 69.5 | 1016 |
| Phi3.5-Vision 4B | 59.7 | 982 |
| PaliGemma 3B | 50.0 | 937 |
| LLAVA OneVision 72B | 76.6 | 1051 |
| LLAVA OneVision 7B | 72.0 | 1024 |
| Cambrian-1 34B | 66.8 | 953 |
| Cambrian-1 8B | 63.4 | 952 |
| xGen - MM - Interleave 4B | 59.5 | 979 |
| LLAVA-1.5 13B | 43.9 | 960 |
| LLAVA-1.5 7B | 40.7 | 951 |
*Benchmarks: AI2D test, ChartQA test, VQA v2.0 test, DocQA test, InfographicVQA test, TextVQA val, RealWorldQA, MMMU val, MathVista testmini, CountBenchQA, Flickr Count (we collected this new dataset that is significantly harder than CountBenchQA).*
## FAQs
### I'm getting a broadcast error when processing images!
Your image might not be in RGB format. You can convert it using the following code snippet:
```python
from PIL import Image

image = Image.open(...)
if image.mode != "RGB":
    image = image.convert("RGB")
```
### Molmo doesn't work great with transparent images!
We received reports that Molmo models might struggle with transparent images.
For the time being, we recommend adding a white or dark background to your images before passing them to the model. The code snippet below shows how to do this using the Python Imaging Library (PIL):
```python
import requests
from PIL import Image, ImageStat
from transformers import AutoProcessor

# Load the image
url = "..."
image = Image.open(requests.get(url, stream=True).raw)

# Convert the image to grayscale to calculate brightness
gray_image = image.convert('L')

# Calculate the average brightness
stat = ImageStat.Stat(gray_image)
average_brightness = stat.mean[0]

# Define background color based on brightness (threshold can be adjusted)
bg_color = (0, 0, 0) if average_brightness > 127 else (255, 255, 255)

# Create a new image with the same size as the original, filled with the background color
new_image = Image.new('RGB', image.size, bg_color)

# Paste the original image on top of the background (use image as a mask if needed)
new_image.paste(image, (0, 0), image if image.mode == 'RGBA' else None)

# Now you can pass the new_image to Molmo
processor = AutoProcessor.from_pretrained(
    'allenai/Molmo-7B-D-0924',
    trust_remote_code=True,
    torch_dtype='auto',
    device_map='auto'
)
```
## License and Use
This model is licensed under Apache 2.0. It is intended for research and educational use.
For more information, please see our [Responsible Use Guidelines](https://allenai.org/responsible-use).
</details>
<br>
# Usage:
The example script below requires an Nvidia GPU and assumes that you `pip install` the CUDA libraries into your virtual environment. Pip-installing the CUDA libraries is NOT required if you have installed them systemwide (as most people do); in that case, simply remove the `set_cuda_paths` function. Either way, make sure that you've installed compatible CUDA and PyTorch versions.
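If you are not sure which combination you currently have, the short check below prints the installed PyTorch version, the CUDA version it was built against, and whether your GPU is visible (a minimal sketch; the exact values depend on your environment):

```python
# Quick sanity check of the installed PyTorch / CUDA combination.
import torch

print("torch:", torch.__version__)            # e.g. 2.6.0+cu126
print("built for CUDA:", torch.version.cuda)  # e.g. 12.6
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```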
<details><summary>COMPATIBLE CUDA AND PYTORCH 2.2.2 COMBINATIONS</summary>
PyTorch is only tested with specific versions of CUDA. When using PyTorch 2.2.2, the following CUDA package versions are required:
- ```pip install nvidia-cublas-cu12==12.1.3.1```
- ```pip install nvidia-cuda-runtime-cu12==12.1.105```
- ```pip install nvidia-cuda-nvrtc-cu12==12.1.105```
- ```pip install nvidia-cudnn-cu12==8.9.2.26```
- Then install [`torch==2.2.2`](https://download.pytorch.org/whl/cu121/torch/), [`torchvision==0.17.2`](https://download.pytorch.org/whl/cu121/torchvision/), and [`torchaudio==2.2.2`](https://download.pytorch.org/whl/cu121/torchaudio/) by visiting each of these three links and creating a `pip install` command based on the link for your Python version and platform.
For example, for Windows using Python 3.11 you would use the following:
```
pip install https://download.pytorch.org/whl/cu121/torch-2.2.2%2Bcu121-cp311-cp311-win_amd64.whl#sha256=efbcfdd4399197d06b32f7c0e1711c615188cdd65427b933648c7478fb880b3f
```
```
pip install https://download.pytorch.org/whl/cu121/torchvision-0.17.2%2Bcu121-cp311-cp311-win_amd64.whl#sha256=10ad542aab6b47dbe73c441381986d50a7ed5021cbe01d593a14477ec1f067a0
```
```
pip install https://download.pytorch.org/whl/cu121/torchaudio-2.2.2%2Bcu121-cp311-cp311-win_amd64.whl#sha256=c7dee68cd3d2b889bab71d4a0c345bdc3ea2fe79a62b921a6b49292c605b6071
```
</details>
<details><summary>COMPATIBLE CUDA AND PYTORCH 2.5.1 COMBINATIONS</summary>
PyTorch is only tested with specific versions of CUDA. When using PyTorch 2.5.1, the following CUDA package versions are required:
- ```pip install nvidia-cublas-cu12==12.4.5.8```
- ```pip install nvidia-cuda-runtime-cu12==12.4.127```
- ```pip install nvidia-cuda-nvrtc-cu12==12.4.127```
- ```pip install nvidia-cudnn-cu12==9.1.0.70```
- Then install [`torch==2.5.1`](https://download.pytorch.org/whl/cu124/torch/), [`torchvision==0.20.1`](https://download.pytorch.org/whl/cu124/torchvision/), and [`torchaudio==2.5.1`](https://download.pytorch.org/whl/cu124/torchaudio/) by visiting each of these three links and creating a `pip install` command based on the link for your Python version and platform.
For example, for Windows using Python 3.11 you would use the following:
```
pip install https://download.pytorch.org/whl/cu124/torch-2.5.1%2Bcu124-cp311-cp311-win_amd64.whl#sha256=6c8a7003ef1327479ede284b6e5ab3527d3900c2b2d401af15bcc50f2245a59f
```
```
pip install https://download.pytorch.org/whl/cu124/torchvision-0.20.1%2Bcu124-cp311-cp311-win_amd64.whl#sha256=15796b453a99ed0f0cbc249d129685ddc88157310135fb3addaf738a15db5306
```
```
pip install https://download.pytorch.org/whl/cu124/torchaudio-2.5.1%2Bcu124-cp311-cp311-win_amd64.whl#sha256=b3d75f4e6efc5412fe78c7f2787ee4f39cea1317652e1a47785879cde109f5c4
```
</details>
<details><summary>COMPATIBLE CUDA AND PYTORCH 2.6.0 COMBINATIONS</summary>
PyTorch is only tested with specific versions of CUDA. When using PyTorch 2.6.0, the following CUDA package versions are required:
- ```pip install nvidia-cublas-cu12==12.6.4.1```
- ```pip install nvidia-cuda-runtime-cu12==12.6.77```
- ```pip install nvidia-cuda-nvrtc-cu12==12.6.77```
- ```pip install nvidia-cudnn-cu12==9.5.1.17```
- Then install [`torch==2.6.0`](https://download.pytorch.org/whl/cu126/torch/), [`torchvision==0.21.0`](https://download.pytorch.org/whl/cu126/torchvision/), and [`torchaudio==2.6.0`](https://download.pytorch.org/whl/cu126/torchaudio/) by visiting each of these three links and creating a `pip install` command based on the link for your Python version and platform.
For example, for Windows using Python 3.11 you would use the following:
```
pip install https://download.pytorch.org/whl/cu126/torch-2.6.0%2Bcu126-cp311-cp311-win_amd64.whl#sha256=5ddca43b81c64df8ce0c59260566e648ee46b2622ab6a718e38dea3c0ca059a1
```
```
pip install https://download.pytorch.org/whl/cu126/torchvision-0.21.0%2Bcu126-cp311-cp311-win_amd64.whl#sha256=ddbf4516fbb7624ac42934b877dcf6a3b295d9914ab89643b55dedb9c9773ce4
```
```
pip install https://download.pytorch.org/whl/cu126/torchaudio-2.6.0%2Bcu126-cp311-cp311-win_amd64.whl#sha256=833b8e350c77021400fed2271df10ecd02b88f684bbc9d57132faa0efc9a0a57
```
</details>
Example script (process single image):
```python
import sys
import os
from pathlib import Path

def set_cuda_paths():
    # Add the pip-installed NVIDIA CUDA libraries to CUDA_PATH and PATH.
    # Remove this function if CUDA is installed systemwide.
    venv_base = Path(sys.executable).parent.parent
    nvidia_base_path = venv_base / 'Lib' / 'site-packages' / 'nvidia'
    cuda_path = nvidia_base_path / 'cuda_runtime' / 'bin'
    cublas_path = nvidia_base_path / 'cublas' / 'bin'
    cudnn_path = nvidia_base_path / 'cudnn' / 'bin'
    nvrtc_path = nvidia_base_path / 'cuda_nvrtc' / 'bin'
    paths_to_add = [
        str(cuda_path),
        str(cublas_path),
        str(cudnn_path),
        str(nvrtc_path),
    ]
    env_vars = ['CUDA_PATH', 'PATH']
    for env_var in env_vars:
        current_value = os.environ.get(env_var, '')
        new_value = os.pathsep.join(paths_to_add + [current_value] if current_value else paths_to_add)
        os.environ[env_var] = new_value

set_cuda_paths()

import torch
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor, GenerationConfig

model_path = r"[INSERT THE PATH TO THE FOLDER HOLDING THE MODEL FILES HERE]"

class VisionModel:
    def __init__(self):
        self.model = None
        self.processor = None
        self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

    def initialize_model_and_processor(self):
        self.processor = AutoProcessor.from_pretrained(
            model_path,
            trust_remote_code=True,
            torch_dtype='auto',
            device_map='auto'
        )
        self.model = AutoModelForCausalLM.from_pretrained(
            model_path,
            trust_remote_code=True,
            torch_dtype='auto',
            device_map='auto'
        )

    def process_single_image(self, image_path):
        # Molmo expects RGB input; convert if necessary.
        image = Image.open(image_path)
        if image.mode != "RGB":
            image = image.convert("RGB")
        text = "Describe this image in as much detail as possible, but be succinct and don't repeat yourself."
        inputs = self.processor.process(images=[image], text=text)
        # Move inputs to the correct device and make a batch of size 1.
        inputs = {k: v.to(self.device).unsqueeze(0) for k, v in inputs.items()}
        output = self.model.generate_from_batch(
            inputs,
            GenerationConfig(max_new_tokens=500, stop_strings=["<|endoftext|>"]),
            tokenizer=self.processor.tokenizer
        )
        # Keep only the newly generated tokens and decode them to text.
        generated_text = self.processor.tokenizer.decode(output[0, inputs['input_ids'].size(1):], skip_special_tokens=True)
        print(f"\nGenerated Text:\n{generated_text}\n")

if __name__ == "__main__":
    image_path = r"[INSERT THE PATH TO THE IMAGE YOU WANT TO PROCESS HERE]"
    vision_model = VisionModel()
    vision_model.initialize_model_and_processor()
    vision_model.process_single_image(image_path)
```