AdaptLLM committed · verified
Commit 90397c1 · 1 Parent(s): 37cfbcf

Update README.md

Files changed (1)
  1. README.md +145 -3
README.md CHANGED
@@ -1,3 +1,145 @@
- ---
- license: apache-2.0
- ---

---
task_categories:
- visual-question-answering
language:
- en
tags:
- Vision
- remote-sensing
configs:
- config_name: CLRS
  data_files:
  - split: test
    path: clrs/data-*.arrow
- config_name: UC_Merced
  data_files:
  - split: test
    path: UCMerced/data-*.arrow
- config_name: FloodNet
  data_files:
  - split: test
    path: floodnet/data-*.arrow
- config_name: NWPU-Captions
  data_files:
  - split: test
    path: NWPU/data-*.arrow
---

# Adapting Multimodal Large Language Models to Domains via Post-Training

This repo contains the **remote sensing visual instruction tasks for evaluating MLLMs** in our paper: [On Domain-Specific Post-Training for Multimodal Large Language Models](https://huggingface.co/papers/2411.19930).

The main project page is: [Adapt-MLLM-to-Domains](https://huggingface.co/AdaptLLM/Adapt-MLLM-to-Domains).

## 1. Download Data
You can load datasets using the `datasets` library:
```python
from datasets import load_dataset

# Choose the task name from the list of available tasks
task_name = 'CLRS'  # Options: 'CLRS', 'UC_Merced', 'FloodNet', 'NWPU-Captions'

# Load the dataset for the chosen task
data = load_dataset('AdaptLLM/remote-sensing-VQA-benchmark', task_name, split='test')

print(list(data)[0])
```

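If you want to pull all four tasks at once, a simple loop over the config names works; this is just a convenience sketch on top of the call above:

```python
from datasets import load_dataset

# Load every remote sensing task and report its size and schema
for task_name in ['CLRS', 'UC_Merced', 'FloodNet', 'NWPU-Captions']:
    data = load_dataset('AdaptLLM/remote-sensing-VQA-benchmark', task_name, split='test')
    print(f'{task_name}: {len(data)} test examples, columns: {data.column_names}')
```
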
The mapping between category indices and names, which is identical for 'CLRS' and 'UC_Merced', is:

```python
# CLRS and UC_Merced share the same 21 scene categories
label_to_name_map = {'0': 'agricultural', '1': 'airplane', '2': 'baseball diamond', '3': 'beach', '4': 'buildings',
                     '5': 'chaparral', '6': 'dense residential', '7': 'forest', '8': 'freeway', '9': 'golf course',
                     '10': 'harbor', '11': 'intersection', '12': 'medium residential', '13': 'mobile home park',
                     '14': 'overpass', '15': 'parking lot', '16': 'river', '17': 'runway', '18': 'sparse residential',
                     '19': 'storage tanks', '20': 'tennis court'}
```

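For scoring classification-style questions, it can help to invert this mapping and look up which category a model's free-text answer mentions. The keyword-matching rule below is only an illustrative sketch; the official scoring lives in the evaluation code linked in the next section.

```python
# Invert the mapping so category names can be looked up back to label indices
name_to_label_map = {name: label for label, name in label_to_name_map.items()}

def answer_to_label(answer):
    """Return the label index of the first category name found in a free-text answer.

    Illustrative keyword matching only; the repo's evaluation scripts define the real scoring.
    """
    answer = answer.lower()
    # Check longer names first so e.g. 'dense residential' is preferred over shorter overlapping names
    for name in sorted(name_to_label_map, key=len, reverse=True):
        if name in answer:
            return name_to_label_map[name]
    return None

print(answer_to_label('The image shows a baseball diamond next to a parking lot.'))  # -> '2'
```
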
## 2. Evaluate Any MLLM Compatible with vLLM on the Remote Sensing Benchmarks

We provide a guide to directly evaluate MLLMs such as LLaVA-v1.6 ([open-source version](https://huggingface.co/Lin-Chen/open-llava-next-llama3-8b)), Qwen2-VL-Instruct, and Llama-3.2-Vision-Instruct.
To evaluate other MLLMs, refer to [this guide](https://github.com/vllm-project/vllm/blob/main/examples/offline_inference_vision_language.py) for modifying the `BaseTask` class in the [vllm_inference/utils/task.py](https://github.com/bigai-ai/QA-Synthesizer/blob/main/vllm_inference/utils/task.py) file.
Feel free to reach out to us for assistance!

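To give a feel for what the inference code does per example, here is a minimal offline-inference sketch in the style of vLLM's vision-language example, using a Qwen2-VL chat template. The question text and the assumption that each example exposes an `image` field are illustrative; the repo's `vllm_inference` code handles the real prompting, batching, and scoring.

```python
from vllm import LLM, SamplingParams
from datasets import load_dataset

# Minimal sketch: one image-question round trip with a Qwen2-VL model via vLLM.
llm = LLM(model='Qwen/Qwen2-VL-2B-Instruct', max_model_len=4096)
sampling_params = SamplingParams(temperature=0.0, max_tokens=128)

example = load_dataset('AdaptLLM/remote-sensing-VQA-benchmark', 'CLRS', split='test')[0]

# Qwen2-VL-style chat template with a single image placeholder (see vLLM's
# offline_inference_vision_language.py for the exact per-model formats).
prompt = ('<|im_start|>user\n<|vision_start|><|image_pad|><|vision_end|>'
          'What is the category of this scene?<|im_end|>\n'
          '<|im_start|>assistant\n')

# Assumes the example stores a PIL image under an 'image' key; inspect the printed
# example from Section 1 to confirm the field names.
outputs = llm.generate(
    {'prompt': prompt, 'multi_modal_data': {'image': example['image']}},
    sampling_params,
)
print(outputs[0].outputs[0].text)
```
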
**The dataset loading script is embedded in the inference code, so you can directly run the following commands to evaluate MLLMs.**

### 1) Setup

Install vLLM using `pip` or [from source](https://vllm.readthedocs.io/en/latest/getting_started/installation.html#build-from-source).

As recommended in the official vLLM documentation, install vLLM in a **fresh** conda environment:

```bash
conda create -n vllm python=3.10 -y
conda activate vllm
pip install vllm  # Ensure vllm>=0.6.2 for compatibility with Llama-3.2. If Llama-3.2 is not used, vllm==0.6.1 is sufficient.
```

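A quick sanity check (not part of the original setup steps) that the installed version meets the requirement:

```python
import vllm

# Llama-3.2-Vision support requires vllm>=0.6.2; other model types work from vllm==0.6.1.
print(vllm.__version__)
```
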
Clone the repository and navigate to the inference directory:

```bash
git clone https://github.com/bigai-ai/QA-Synthesizer.git
cd QA-Synthesizer/vllm_inference
RESULTS_DIR=./eval_results  # Directory for saving evaluation scores
```

### 2) Evaluate

Run the following commands:

```bash
# Specify the domain: choose from ['remote-sensing', 'CLRS', 'UC_Merced', 'FloodNet', 'NWPU-Captions']
# 'remote-sensing' runs inference on all remote sensing tasks; the others run on individual tasks.
DOMAIN='remote-sensing'

# Specify the model type: choose from ['llava', 'qwen2_vl', 'mllama']
# for LLaVA-v1.6, Qwen2-VL, and Llama-3.2-Vision-Instruct, respectively.
MODEL_TYPE='qwen2_vl'

# Set the model repository ID on Hugging Face. Examples:
# "Qwen/Qwen2-VL-2B-Instruct" or "AdaptLLM/remote-sensing-Qwen2-VL-2B-Instruct" for MLLMs based on Qwen2-VL-Instruct.
# "meta-llama/Llama-3.2-11B-Vision-Instruct" or "AdaptLLM/remote-sensing-Llama-3.2-11B-Vision-Instruct" for MLLMs based on Llama-3.2-Vision-Instruct.
# "AdaptLLM/remote-sensing-LLaVA-NeXT-Llama3-8B" for MLLMs based on LLaVA-v1.6.
MODEL=AdaptLLM/remote-sensing-Qwen2-VL-2B-Instruct

# Set the directory for saving model prediction outputs:
OUTPUT_DIR=./output/AdaMLLM-remote-sensing-Qwen-2B_${DOMAIN}

# Run inference with data parallelism; adjust CUDA devices as needed:
CUDA_VISIBLE_DEVICES='0,1,2,3,4,5,6,7' bash run_inference.sh ${MODEL} ${DOMAIN} ${MODEL_TYPE} ${OUTPUT_DIR} ${RESULTS_DIR}
```

Detailed scripts to reproduce our results are provided in [Evaluation.md](https://github.com/bigai-ai/QA-Synthesizer/blob/main/docs/Evaluation.md).

### 3) Results
The evaluation results are stored in `./eval_results`, and the model prediction outputs are in `./output`.

## Citation
If you find our work helpful, please cite us.

[AdaMLLM](https://huggingface.co/papers/2411.19930)
```bibtex
@article{adamllm,
  title={On Domain-Specific Post-Training for Multimodal Large Language Models},
  author={Cheng, Daixuan and Huang, Shaohan and Zhu, Ziyu and Zhang, Xintong and Zhao, Wayne Xin and Luan, Zhongzhi and Dai, Bo and Zhang, Zhenliang},
  journal={arXiv preprint arXiv:2411.19930},
  year={2024}
}
```

[Adapt LLM to Domains](https://huggingface.co/papers/2309.09530) (ICLR 2024)
```bibtex
@inproceedings{cheng2024adapting,
  title={Adapting Large Language Models via Reading Comprehension},
  author={Daixuan Cheng and Shaohan Huang and Furu Wei},
  booktitle={The Twelfth International Conference on Learning Representations},
  year={2024},
  url={https://openreview.net/forum?id=y886UXPEZ0}
}
```