Improve model card for LLaVA_MORE-phi_4-finetuning (#1)
- Improve model card for LLaVA_MORE-phi_4-finetuning (186e6bcde7e8d836531510161cd4075fcdcd45db)
- readme (b73830037077acf43127d9fa46ce1a4d37f222e1)
Co-authored-by: Niels Rogge <[email protected]>

README.md CHANGED

---
library_name: transformers
pipeline_tag: image-text-to-text
license: apache-2.0
tags:
- multimodal
- vision-language-model
- llava
- instruction-tuned
- phi-4
- vqa
base_model: microsoft/Phi-4-mini-instruct
---

# Model Card for LLaVA_MORE-phi_4-finetuning

<div align="center">
<!-- <img src="https://github.com/aimagelab/LLaVA-MORE/blob/main/images/image_no_back.png" width="200" height="200"> -->
<h1> 🔥 LLaVA-MORE 🔥

A Comparative Study of LLMs and Visual Backbones <br>for Enhanced Visual Instruction Tuning
</h1>
</div>

This model is part of the **LLaVA-MORE** family of Multimodal Large Language Models (MLLMs), presented in the paper [LLaVA-MORE: A Comparative Study of LLMs and Visual Backbones for Enhanced Visual Instruction Tuning](https://huggingface.co/papers/2503.15621).

LLaVA-MORE integrates recent language models with diverse visual backbones. It employs a unified training protocol applied consistently across all architectures to ensure fair comparisons and to systematically explore the trade-offs between model size, architecture, and performance. This model, `LLaVA_MORE-phi_4-finetuning`, uses **Phi-4 Instruct** as its LLM backbone and is finetuned on the [LLaVA-Instruct-665K](https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K) dataset.

It is designed for multimodal reasoning, generation, and instruction following, and provides insights into the design of more effective MLLMs.

## Citation
If you make use of our work, please cite our paper:

```bibtex
@inproceedings{cocchi2025llava,
  title={{LLaVA-MORE: A Comparative Study of LLMs and Visual Backbones for Enhanced Visual Instruction Tuning}},
  author={Cocchi, Federico and Moratelli, Nicholas and Caffagni, Davide and Sarto, Sara and Baraldi, Lorenzo and Cornia, Marcella and Cucchiara, Rita},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops},
  year={2025}
}
```

## Model Details

### Model Description

This is a checkpoint from the LLaVA-MORE family of MLLMs. It integrates the **Phi-4 Instruct** Large Language Model with a visual backbone (specifically, `openai/clip-vit-large-patch14-336`, as per `config.json`) and has been finetuned on the `LLaVA-Instruct-665K` dataset. The project aims to provide a reproducible evaluation framework to guide future model development by systematically studying the impact of different LLMs and visual encoders, as well as factors such as image resolution and pre-training datasets.
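
As a quick check, the sketch below reads the checkpoint's `config.json` to confirm the visual backbone. It is illustrative only: the `mm_vision_tower` key name follows the upstream LLaVA config convention and is an assumption here.

```python
# Illustrative sketch: confirm the visual backbone recorded in the checkpoint's config.json.
# The "mm_vision_tower" key is assumed from the upstream LLaVA config format.
import json
from huggingface_hub import hf_hub_download

config_path = hf_hub_download("aimagelab/LLaVA_MORE-phi_4-finetuning", "config.json")
with open(config_path) as f:
    config = json.load(f)

print(config.get("mm_vision_tower", "key not found - inspect the full config"))
# print(json.dumps(config, indent=2))  # uncomment to see every field
```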

- **Developed by:** Federico Cocchi, Nicholas Moratelli, Davide Caffagni, Sara Sarto, Lorenzo Baraldi, Marcella Cornia, and Rita Cucchiara (AImageLab, University of Modena and Reggio Emilia)
- **Model type:** Multimodal Large Language Model (MLLM) / Vision-Language Model
- **Language(s):** English
- **License:** Apache-2.0
- **Finetuned from model:** `microsoft/Phi-4-mini-instruct`

### Model Sources

- **Repository:** [https://github.com/aimagelab/LLaVA-MORE](https://github.com/aimagelab/LLaVA-MORE)
- **Paper:** [https://huggingface.co/papers/2503.15621](https://huggingface.co/papers/2503.15621)
- **Project Website:** [https://aimagelab.ing.unimore.it/imagelab](https://aimagelab.ing.unimore.it/imagelab)
- **Hugging Face Collection:** [LLaVA-MORE Models](https://huggingface.co/collections/aimagelab/llava-more-66aa6c49167e190bf27e7be4)
- **Hugging Face Demo:** [https://huggingface.co/spaces/aimagelab/LLaVA-MORE](https://huggingface.co/spaces/aimagelab/LLaVA-MORE)

## Uses

### Direct Use

This model is intended for multimodal reasoning, generation, and instruction-following tasks. It processes visual inputs together with textual prompts to generate informative and relevant text responses. Typical applications include visual question answering, image captioning, and conversational AI involving images.

### Out-of-Scope Use

This model is not intended for generating harmful content, spreading misinformation, or deployment without proper human oversight. As an AI model, it may hallucinate or produce factually incorrect information, and it should not be used in safety-critical applications without thorough domain-specific evaluation and mitigation strategies.

## Bias, Risks, and Limitations

Because the model is trained on large-scale datasets, it may inherit biases present in the data, leading to biased outputs. Potential risks include generating offensive, inaccurate, or harmful content. Like all generative models, it may also hallucinate or provide factually incorrect information.

### Recommendations

Users should be aware of the inherent biases and limitations of MLLMs. Human review of outputs is recommended, especially in sensitive applications. Further research and evaluation are needed to fully understand and mitigate potential societal impacts.

## How to Get Started with the Model

To get started with inference, use the provided `run_llava.py` script from the project's GitHub repository (which builds on the `transformers` library), or integrate the model directly from Python as sketched further below.

First, install the necessary packages as described in the [GitHub Installation section](https://github.com/aimagelab/LLaVA-MORE#installation):

```bash
conda create -n more python==3.8.16
conda activate more
pip install -r requirements.txt  # refer to the GitHub repo for the exact requirements.txt
```

**Using the `run_llava.py` script (recommended for full functionality):**

```bash
cd ~/LLaVA-MORE                                     # navigate to the cloned LLaVA-MORE repository
source activate more
export PYTHONPATH=.

model_path=aimagelab/LLaVA_MORE-phi_4-finetuning    # adjust to the specific model path
model_architecture=llava_phi                        # based on config.json
conversation=phi_4                                  # may vary with the tokenizer config; check the LLaVA-MORE code for the best match

export HF_TOKEN=hf_read_token                       # replace with your Hugging Face read token if needed
export TOKENIZER_PATH=$model_path

python -u src/llava/eval/run_llava.py --model-path $model_path --model-architecture $model_architecture --conv-mode $conversation
```
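
For a direct Python integration, a minimal sketch is shown below. It assumes the checkpoint can be loaded through the Hugging Face-native LLaVA classes (`LlavaForConditionalGeneration` and `AutoProcessor`); the custom `llava_phi` architecture may instead require the repository code, in which case use the `run_llava.py` script above. The image URL and prompt are placeholders.

```python
# Illustrative sketch (assumption): load the checkpoint with the HF-native LLaVA classes.
# If your transformers version cannot load this architecture, use run_llava.py above instead.
import requests
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "aimagelab/LLaVA_MORE-phi_4-finetuning"
model = LlavaForConditionalGeneration.from_pretrained(model_id, torch_dtype="auto", device_map="auto")
processor = AutoProcessor.from_pretrained(model_id)

# Placeholder image and question
image = Image.open(requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)
prompt = "<image>\nWhat is shown in this image?"

inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128)
print(processor.decode(output_ids[0], skip_special_tokens=True))
```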

## Training Details

### Training Data

The LLaVA-MORE models are typically trained in two stages, using the datasets below (a short sketch for downloading the finetuning annotations follows the list):
- **Pretraining:** on the [LCS-558K dataset](https://huggingface.co/datasets/liuhaotian/LLaVA-Pretrain).
- **Finetuning:** on the [LLaVA-Instruct-665K dataset](https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K).
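
The sketch below shows one way to pull the finetuning annotations with `huggingface_hub`; the `llava_v1_5_mix665k.json` filename follows the upstream LLaVA-v1.5 release and is an assumption here, so list the repository files first if it differs.

```python
# Illustrative sketch: download the instruction-tuning annotations used for finetuning.
# The llava_v1_5_mix665k.json filename is assumed from the upstream LLaVA-v1.5 release.
import json
from huggingface_hub import hf_hub_download, list_repo_files

repo_id = "liuhaotian/LLaVA-Instruct-150K"
print(list_repo_files(repo_id, repo_type="dataset"))  # check which annotation files exist

path = hf_hub_download(repo_id, "llava_v1_5_mix665k.json", repo_type="dataset")
with open(path) as f:
    samples = json.load(f)
print(f"{len(samples)} instruction-tuning samples")
```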

### Training Procedure

The training employs a unified protocol consistently applied across all architectures to ensure fair comparisons and enhance reproducibility. The project publicly releases the source code and bash scripts for distributed training on HPC facilities with a SLURM scheduler. More details on the training procedure and hyperparameters can be found in the [Training section of the GitHub repository](https://github.com/aimagelab/LLaVA-MORE#training).

## Evaluation

### Benchmarks and Comparisons on Instruction Multimodal Datasets in the Literature

The table below presents the performance of LLaVA-MORE variants, including this model, compared to other LLaVA versions across various multimodal datasets. For the most up-to-date and complete evaluation results, please refer to the [Performance section in the GitHub repository](https://github.com/aimagelab/LLaVA-MORE#performance).

<div align="center">
<img src="https://huggingface.co/aimagelab/LLaVA_MORE-phi_4-finetuning/resolve/main/images/plot.png" width="500">
</div>

<div align="center">

| Model Name | Text-VQA* | Science-QA | AI2D | SEED-vid | SEED-all | SEED-img | MMMU | MMBench-Cn | MMBench-En | POPE | GQA | MME-P | MME-C |
|------------|:---------:|:----------:|:----:|:--------:|:--------:|:--------:|:----:|:----------:|:----------:|:----:|:---:|:-----:|:-----:|
| LLaVA-v1.5-7B | 58.2 | 69.0 | 56.4 | 42.0 | 61.6 | 66.8 | 34.2 | 56.5 | 65.3 | 85.6 | 62.4 | 1474.3 | 314.6 |
| LLaVA-v1.5-LLaMA3-8B | 57.6 | 74.2 | 60.7 | 42.0 | 64.3 | 70.1 | 37.3 | 65.4 | 70.3 | 85.4 | 63.5 | 1544.4 | 330.3 |
| **LLaVA-v1.5-LLaMA3_1-8B** | 58.4 | 76.3 | 61.8 | 42.4 | 64.1 | 69.8 | 39.4 | **68.2** | 72.4 | 85.1 | 63.6 | 1531.5 | **353.3** |
| **LLaVA-v1.5-LLaMA3_1-8B-S2** | 60.9 | 76.7 | 62.2 | 42.3 | 64.2 | 69.9 | 38.7 | 65.8 | 71.1 | 86.5 | 64.5 | **1563.8** | 293.2 |
| **LLaVA-v1.5-LLaMA3_1-8B-siglip** | 62.1 | **77.5** | 63.6 | **46.1** | 65.8 | 71.0 | 39.8 | **68.2** | **73.1** | 86.1 | 64.6 | 1531.0 | 315.4 |
| **LLaVA-v1.5-LLaMA3_1-8B-S2-siglip** | 63.5 | 77.1 | 62.7 | 44.7 | 65.5 | 71.0 | **40.0** | 68.0 | 71.8 | 86.0 | 64.9 | 1541.4 | 336.4 |
| **LLaVA-v1.5-Phi_4-4B** | 54.0 | 71.3 | 61.1 | 42.3 | 63.5 | 69.1 | 38.8 | 64.2 | 69.2 | 85.9 | 62.1 | 1372.2 | 281.1 |
| **LLaVA-v1.5-gemma_2-9B** | 60.7 | 75.4 | 64.8 | 44.1 | 64.5 | 69.9 | 37.9 | 65.9 | 71.9 | **86.8** | 64.2 | 1522.5 | 307.5 |
| **LLaVA-v1.5-gemma_2-9B-siglip2** | **66.7** | 76.2 | **65.3** | 46.0 | **67.5** | **73.1** | 38.7 | 68.0 | 72.0 | 86.1 | **65.6** | 1510.9 | 308.2 |
| **LLaVA-v1.5-Distill-LLaMA-8B** | 56.3 | 74.5 | 58.8 | 43.5 | 63.5 | 68.6 | 38.1 | 66.8 | 61.3 | 85.1 | 63.0 | 1495.1 | 295.0 |

</div>

\* The results on Text-VQA are computed with OCR tokens in the input prompt. **The models in bold are LLaVA-MORE variants.**

## Checkpoints

For a complete list of all LLaVA-MORE checkpoints, you can refer to the [Hugging Face model collection](https://huggingface.co/collections/aimagelab/llava-more-66aa6c49167e190bf27e7be4).
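
The collection can also be enumerated programmatically; below is a minimal sketch, assuming a recent `huggingface_hub` release that provides `get_collection`.

```python
# Illustrative sketch: list every item in the LLaVA-MORE Hugging Face collection.
from huggingface_hub import get_collection

collection = get_collection("aimagelab/llava-more-66aa6c49167e190bf27e7be4")
for item in collection.items:
    print(item.item_type, item.item_id)
```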

## Acknowledgments

We thank the [LLaVA](https://github.com/haotian-liu/LLaVA.git) team for open-sourcing a modular codebase that lets us extend and train different models within the LLaVA family. We are also happy users of the [lmms-eval](https://github.com/EvolvingLMMs-Lab/lmms-eval.git) library, which has significantly reduced the evaluation time of our checkpoints across different datasets.

We also thank [CINECA](https://www.hpc.cineca.it/systems/hardware/leonardo/) for providing the high-performance computing resources used to train LLaVA-MORE. This work is supported by the PNRR-M4C2 project [FAIR - Future Artificial Intelligence Research](https://fondazione-fair.it/) and by the PNRR project [ITSERR - Italian Strengthening of Esfri RI Resilience](https://www.itserr.it/).