Howieeeee committed · Commit 2431a67 · verified · 1 Parent(s): 7f083cd

Upload folder using huggingface_hub

.DS_Store ADDED
Binary file (6.15 kB)
 
README.md ADDED
---
language:
- en
- zh
license: apache-2.0
tags:
- vision
- image-text-to-text
- transformers.js
datasets:
- lmms-lab/LLaVA-OneVision-Data
pipeline_tag: image-text-to-text
arxiv: 2408.03326
library_name: transformers
---
# LLaVA-Onevision Model Card

![image/png](llava_onevision_arch.png)

Also check out the Google Colab demo to run LLaVA-Onevision on a free-tier Google Colab instance: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1-4AtYjR8UMtCALV0AswU1kiNkWCLTALT?usp=sharing)

Below is the model card of the 0.5B LLaVA-Onevision model, copied from the original LLaVA-Onevision model card that you can find [here](https://huggingface.co/lmms-lab/llava-onevision-qwen2-0.5b-si).

## Model details

**Model type:**
LLaVA-Onevision is an open-source multimodal LLM trained by fine-tuning Qwen2 on GPT-generated multimodal instruction-following data.
LLaVA-OneVision is the first single model that can simultaneously push the performance boundaries of open LMMs in three important computer vision scenarios: single-image, multi-image, and video. Importantly, the design of LLaVA-OneVision allows strong transfer learning across different modalities and scenarios, yielding new emergent capabilities. In particular, strong video understanding and cross-scenario capabilities are demonstrated through task transfer from images to videos.

**Model date:**
LLaVA-Onevision-0.5-ov was added in August 2024.

**Paper or resources for more information:**
https://llava-vl.github.io/

- **Architecture:** SO400M + Qwen2
- **Pretraining Stage:** LCS-558K, 1 epoch, projector
- **Mid Stage:** a mixture of 4.7M high-quality synthetic samples, 1 epoch, full model
- **Final-Image Stage:** a mixture of 3.6M single-image samples, 1 epoch, full model
- **OneVision Stage:** a mixture of 1.6M single-image/multi-image/video samples, 1 epoch, full model
- **Precision:** bfloat16

## How to use the model

First, make sure to have `transformers` installed from this [branch](https://github.com/huggingface/transformers/pull/32673), or `transformers >= 4.45.0`.
The model supports multi-image and multi-prompt generation, meaning that you can pass multiple images in your prompt. Make sure to follow the correct prompt template by applying the chat template:

### Using `pipeline`:

Below we use the [`"llava-hf/llava-onevision-qwen2-0.5b-ov-hf"`](https://huggingface.co/llava-hf/llava-onevision-qwen2-0.5b-ov-hf) checkpoint.

```python
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="llava-hf/llava-onevision-qwen2-0.5b-ov-hf")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/ai2d-demo.jpg"},
            {"type": "text", "text": "What does the label 15 represent? (1) lava (2) core (3) tunnel (4) ash cloud"},
        ],
    },
]

out = pipe(text=messages, max_new_tokens=20)
print(out)
>>> [{'input_text': [{'role': 'user', 'content': [{'type': 'image', 'url': 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/ai2d-demo.jpg'}, {'type': 'text', 'text': 'What does the label 15 represent? (1) lava (2) core (3) tunnel (4) ash cloud'}]}], 'generated_text': 'Lava'}]
```
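Since the pipeline consumes a plain list of chat messages, a multi-image prompt is simply more `{"type": "image", ...}` entries in the same `content` list. Below is a minimal sketch of such a message structure; the two URLs are placeholders, not files from this repository:

```python
# Sketch of a multi-image chat message for the same pipeline.
# The URLs are hypothetical placeholders for illustration only.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/first.jpg"},
            {"type": "image", "url": "https://example.com/second.jpg"},
            {"type": "text", "text": "What differs between these two images?"},
        ],
    },
]

# Count the image entries the processor would have to resolve.
n_images = sum(
    1
    for message in messages
    for part in message["content"]
    if part["type"] == "image"
)
print(n_images)  # 2
```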

### Using pure `transformers`:

Below is an example script to run generation in `float16` precision on a GPU device:

```python
import requests
from PIL import Image

import torch
from transformers import AutoProcessor, LlavaOnevisionForConditionalGeneration

model_id = "llava-hf/llava-onevision-qwen2-0.5b-ov-hf"
model = LlavaOnevisionForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
).to(0)

processor = AutoProcessor.from_pretrained(model_id)

# Define a chat history and use `apply_chat_template` to get the correctly formatted prompt
# Each value in "content" has to be a list of dicts with types ("text", "image")
conversation = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "What are these?"},
            {"type": "image"},
        ],
    },
]
prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)

image_file = "http://images.cocodataset.org/val2017/000000039769.jpg"
raw_image = Image.open(requests.get(image_file, stream=True).raw)
inputs = processor(images=raw_image, text=prompt, return_tensors="pt").to(0, torch.float16)

output = model.generate(**inputs, max_new_tokens=200, do_sample=False)
print(processor.decode(output[0][2:], skip_special_tokens=True))
```

-----------
From `transformers>=v4.48`, you can also pass an image/video URL or local path in the conversation history and let the chat template handle the rest.
The chat template will load the image for you and return inputs as `torch.Tensor`s, which you can pass directly to `model.generate()`.

```python
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://www.ilankelman.org/stopsigns/australia.jpg"},
            {"type": "text", "text": "What is shown in this image?"},
        ],
    },
]

inputs = processor.apply_chat_template(messages, add_generation_prompt=True, tokenize=True, return_dict=True, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=50)
```

### Model optimization

#### 4-bit quantization through the `bitsandbytes` library

First make sure to install `bitsandbytes` (`pip install bitsandbytes`) and to have access to a CUDA-compatible GPU device. Simply change the snippet above as follows:

```diff
model = LlavaOnevisionForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
+   load_in_4bit=True
)
```

#### Use Flash-Attention 2 to further speed up generation

First make sure to install `flash-attn`; refer to the [original repository of Flash Attention](https://github.com/Dao-AILab/flash-attention) for installation instructions. Then change the snippet above as follows:

```diff
model = LlavaOnevisionForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
+   use_flash_attention_2=True
).to(0)
```

### Usage w/ Transformers.js

If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@huggingface/transformers) using:

```bash
npm i @huggingface/transformers
```

**Example:** Multi-round conversations w/ PKV caching

```js
import { AutoProcessor, AutoTokenizer, LlavaOnevisionForConditionalGeneration, RawImage } from '@huggingface/transformers';

// Load tokenizer, processor and model
const model_id = 'llava-hf/llava-onevision-qwen2-0.5b-ov-hf';

const tokenizer = await AutoTokenizer.from_pretrained(model_id);
const processor = await AutoProcessor.from_pretrained(model_id);
const model = await LlavaOnevisionForConditionalGeneration.from_pretrained(model_id, {
    dtype: {
        embed_tokens: 'fp16', // or 'fp32' or 'q8'
        vision_encoder: 'fp16', // or 'fp32' or 'q8'
        decoder_model_merged: 'q4', // or 'q8'
    },
    // device: 'webgpu',
});

// Prepare text inputs
const prompt = 'What does the text say?';
const messages = [
    { role: 'system', content: 'Answer the question.' },
    { role: 'user', content: `<image>\n${prompt}` },
];
const text = tokenizer.apply_chat_template(messages, { tokenize: false, add_generation_prompt: true });
const text_inputs = tokenizer(text);

// Prepare vision inputs
const url = 'https://huggingface.co/qnguyen3/nanoLLaVA/resolve/main/example_1.png';
const image = await RawImage.fromURL(url);
const vision_inputs = await processor(image);

// Generate response
const { past_key_values, sequences } = await model.generate({
    ...text_inputs,
    ...vision_inputs,
    do_sample: false,
    max_new_tokens: 64,
    return_dict_in_generate: true,
});

// Decode output
const answer = tokenizer.decode(
    sequences.slice(0, [text_inputs.input_ids.dims[1], null]),
    { skip_special_tokens: true },
);
console.log(answer);
// The text says "small but mighty" in a playful font.

const new_messages = [
    ...messages,
    { role: 'assistant', content: answer },
    { role: 'user', content: 'How does the text correlate to the context of the image?' },
];
const new_text = tokenizer.apply_chat_template(new_messages, { tokenize: false, add_generation_prompt: true });
const new_text_inputs = tokenizer(new_text);

// Generate another response
const output = await model.generate({
    ...new_text_inputs,
    past_key_values,
    do_sample: false,
    max_new_tokens: 256,
});
const new_answer = tokenizer.decode(
    output.slice(0, [new_text_inputs.input_ids.dims[1], null]),
    { skip_special_tokens: true },
);
console.log(new_answer);
// The text "small but mighty" is likely a playful or humorous reference to the image of the blue mouse with the orange dumbbell. It could be used as a motivational phrase or a playful way to express the idea that even small things can be impressive or powerful.
```

# Citation
```bibtex
@misc{li2024llavaonevisioneasyvisualtask,
      title={LLaVA-OneVision: Easy Visual Task Transfer},
      author={Bo Li and Yuanhan Zhang and Dong Guo and Renrui Zhang and Feng Li and Hao Zhang and Kaichen Zhang and Yanwei Li and Ziwei Liu and Chunyuan Li},
      year={2024},
      eprint={2408.03326},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2408.03326},
}
```
preprocessor_config.json ADDED
```json
{
  "do_convert_rgb": true,
  "do_normalize": true,
  "do_pad": true,
  "do_rescale": true,
  "do_resize": true,
  "image_grid_pinpoints": [
    [384, 384], [384, 768], [384, 1152], [384, 1536], [384, 1920], [384, 2304],
    [768, 384], [768, 768], [768, 1152], [768, 1536], [768, 1920], [768, 2304],
    [1152, 384], [1152, 768], [1152, 1152], [1152, 1536], [1152, 1920], [1152, 2304],
    [1536, 384], [1536, 768], [1536, 1152], [1536, 1536], [1536, 1920], [1536, 2304],
    [1920, 384], [1920, 768], [1920, 1152], [1920, 1536], [1920, 1920], [1920, 2304],
    [2304, 384], [2304, 768], [2304, 1152], [2304, 1536], [2304, 1920], [2304, 2304]
  ],
  "image_mean": [0.5, 0.5, 0.5],
  "image_processor_type": "LlavaOnevisionImageProcessor",
  "image_std": [0.5, 0.5, 0.5],
  "processor_class": "LlavaOnevisionProcessor",
  "resample": 3,
  "rescale_factor": 0.00392156862745098,
  "size": {
    "height": 384,
    "width": 384
  }
}
```
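For intuition: `rescale_factor` is approximately 1/255, and combined with a per-channel mean and std of 0.5, it maps raw `uint8` pixel values from [0, 255] into roughly [-1, 1]. A quick sketch of that arithmetic (an illustration of the config values, not the image processor's actual code):

```python
# Sketch of the preprocessing arithmetic implied by this config:
# rescale_factor ~= 1/255 maps uint8 pixels into [0, 1], then
# normalizing with mean 0.5 and std 0.5 maps them into [-1, 1].
rescale_factor = 0.00392156862745098  # ~= 1 / 255, from the config above
mean, std = 0.5, 0.5

def normalize_pixel(value: int) -> float:
    """Apply rescale + normalize to a single uint8 channel value."""
    return (value * rescale_factor - mean) / std

print(normalize_pixel(0))    # -1.0
print(normalize_pixel(255))  # ~= 1.0
```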
processor_config.json ADDED
```json
{
  "image_token": "<image>",
  "num_image_tokens": 729,
  "processor_class": "LlavaOnevisionProcessor",
  "video_token": "<video>",
  "vision_feature_select_strategy": "full"
}
```
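Conceptually, the processor expands each `<image>` placeholder in the prompt into `num_image_tokens` vision-token slots before the language model sees the sequence; 729 = 27 × 27 is consistent with a square feature grid from the vision tower. The sketch below illustrates that idea only; it is not the actual `LlavaOnevisionProcessor` implementation:

```python
# Hypothetical sketch: expand each "<image>" placeholder into the number of
# vision-token slots given by processor_config.json above.
NUM_IMAGE_TOKENS = 729  # == 27 * 27, a square feature grid
IMAGE_TOKEN = "<image>"

def expand_image_tokens(prompt: str) -> str:
    """Replace each image placeholder with NUM_IMAGE_TOKENS copies of it."""
    return prompt.replace(IMAGE_TOKEN, IMAGE_TOKEN * NUM_IMAGE_TOKENS)

expanded = expand_image_tokens("<image>\nWhat are these?")
print(expanded.count(IMAGE_TOKEN))  # 729
```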
special_tokens_map.json ADDED
```json
{
  "additional_special_tokens": [
    "<|im_start|>",
    "<|im_end|>"
  ],
  "eos_token": {
    "content": "<|im_end|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "<|endoftext|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
```
tokenizer.json ADDED
The diff for this file is too large to render.
 
tokenizer_config.json ADDED
```json
{
  "add_prefix_space": false,
  "added_tokens_decoder": {
    "151643": {"content": "<|endoftext|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151644": {"content": "<|im_start|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151645": {"content": "<|im_end|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151646": {"content": "<image>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151647": {"content": "<video>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true}
  },
  "additional_special_tokens": [
    "<|im_start|>",
    "<|im_end|>"
  ],
  "bos_token": null,
  "chat_template": "{% for message in messages %}{% if loop.first and messages[0]['role'] != 'system' %}{{ '<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n' }}{% endif %}{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}",
  "clean_up_tokenization_spaces": false,
  "eos_token": "<|im_end|>",
  "errors": "replace",
  "max_length": null,
  "model_max_length": 32768,
  "pad_to_multiple_of": null,
  "pad_token": "<|endoftext|>",
  "pad_token_type_id": 0,
  "padding_side": "right",
  "processor_class": "LlavaOnevisionProcessor",
  "split_special_tokens": false,
  "tokenizer_class": "Qwen2Tokenizer",
  "unk_token": null
}
```
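To make the `chat_template` concrete, here is a plain-Python sketch of the ChatML string it renders (an illustration of the Jinja logic above, not the tokenizer's actual template engine): a default system turn is prepended when none is given, each turn is wrapped in `<|im_start|>`/`<|im_end|>`, and a trailing assistant header is appended when `add_generation_prompt` is set.

```python
# Plain-Python sketch of what the Jinja chat_template renders.
def render_chatml(messages, add_generation_prompt=False):
    out = ""
    # If the first message is not a system turn, prepend the default one.
    if messages and messages[0]["role"] != "system":
        out += "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    for m in messages:
        out += "<|im_start|>" + m["role"] + "\n" + m["content"] + "<|im_end|>\n"
    if add_generation_prompt:
        out += "<|im_start|>assistant\n"
    return out

prompt = render_chatml(
    [{"role": "user", "content": "<image>\nWhat are these?"}],
    add_generation_prompt=True,
)
print(prompt)
```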
video_processor/preprocessor_config.json ADDED
```json
{
  "do_convert_rgb": true,
  "do_normalize": true,
  "do_pad": true,
  "do_rescale": true,
  "do_resize": true,
  "image_mean": [0.5, 0.5, 0.5],
  "image_processor_type": "LlavaOnevisionVideoProcessor",
  "image_std": [0.5, 0.5, 0.5],
  "processor_class": "LlavaOnevisionProcessor",
  "resample": 3,
  "rescale_factor": 0.00392156862745098,
  "size": {
    "height": 384,
    "width": 384
  }
}
```
vocab.json ADDED
The diff for this file is too large to render.