---
task_categories:
- visual-question-answering
language:
- en
tags:
- Vision
- remote-sensing
configs:
- config_name: CLRS
data_files:
- split: test
path: CLRS/data-*.arrow
- config_name: UC_Merced
data_files:
- split: test
path: UCMerced/data-*.arrow
- config_name: FloodNet
data_files:
- split: test
path: floodnet/data-*.arrow
- config_name: NWPU-Captions
data_files:
- split: test
path: NWPU/data-*.arrow
---
# Adapting Multimodal Large Language Models to Domains via Post-Training
This repo contains the **remote sensing visual instruction tasks for evaluating MLLMs** in our paper: [On Domain-Specific Post-Training for Multimodal Large Language Models](https://huggingface.co/papers/2411.19930).
The main project page is: [Adapt-MLLM-to-Domains](https://huggingface.co/AdaptLLM/Adapt-MLLM-to-Domains).
## 1. Download Data
You can load datasets using the `datasets` library:
```python
from datasets import load_dataset
# Choose the task name from the list of available tasks
task_name = 'CLRS' # Options: 'CLRS', 'UC_Merced', 'FloodNet', 'NWPU-Captions'
# Load the dataset for the chosen task
data = load_dataset('AdaptLLM/remote-sensing-VQA-benchmark', task_name, split='test')
print(list(data)[0])
```
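To list the available task configs programmatically instead of hard-coding the name, the `datasets` library provides `get_dataset_config_names` (a small optional check, not part of the evaluation code):
```python
from datasets import get_dataset_config_names

# List every config defined in this dataset repository.
configs = get_dataset_config_names('AdaptLLM/remote-sensing-VQA-benchmark')
print(configs)  # expected to include 'CLRS', 'UC_Merced', 'FloodNet', 'NWPU-Captions'
```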
The mapping between category indices and names for 'CLRS' and 'UC_Merced' is:
```python
# CLRS
label_to_name_map = {
    '0': 'agricultural', '1': 'airplane', '2': 'baseball diamond', '3': 'beach',
    '4': 'buildings', '5': 'chaparral', '6': 'dense residential', '7': 'forest',
    '8': 'freeway', '9': 'golf course', '10': 'harbor', '11': 'intersection',
    '12': 'medium residential', '13': 'mobile home park', '14': 'overpass',
    '15': 'parking lot', '16': 'river', '17': 'runway', '18': 'sparse residential',
    '19': 'storage tanks', '20': 'tennis court'}

# UC_Merced (identical to the CLRS mapping)
label_to_name_map = {
    '0': 'agricultural', '1': 'airplane', '2': 'baseball diamond', '3': 'beach',
    '4': 'buildings', '5': 'chaparral', '6': 'dense residential', '7': 'forest',
    '8': 'freeway', '9': 'golf course', '10': 'harbor', '11': 'intersection',
    '12': 'medium residential', '13': 'mobile home park', '14': 'overpass',
    '15': 'parking lot', '16': 'river', '17': 'runway', '18': 'sparse residential',
    '19': 'storage tanks', '20': 'tennis court'}
```
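If you use these maps to score classification-style predictions, converting a predicted index back to its category name is a one-line lookup; a minimal sketch (`predicted_label` is an illustrative placeholder, not a field name from the dataset):
```python
# Map a predicted label index (stored as a string) back to its category name.
predicted_label = '7'  # illustrative value, not taken from the dataset
print(label_to_name_map.get(predicted_label, 'unknown'))  # -> 'forest'
```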
## 2. Evaluate Any MLLM Compatible with vLLM on the Remote Sensing Benchmarks
We provide a guide to directly evaluate MLLMs such as LLaVA-v1.6 ([open-source version](https://huggingface.co/Lin-Chen/open-llava-next-llama3-8b)), Qwen2-VL-Instruct, and Llama-3.2-Vision-Instruct.
To evaluate other MLLMs, refer to [this guide](https://github.com/vllm-project/vllm/blob/main/examples/offline_inference_vision_language.py) for modifying the `BaseTask` class in the [vllm_inference/utils/task.py](https://github.com/bigai-ai/QA-Synthesizer/blob/main/vllm_inference/utils/task.py) file.
Feel free to reach out to us for assistance!
**The dataset loading script is embedded in the inference code, so you can directly run the following commands to evaluate MLLMs.**
### 1) Setup
Install vLLM using `pip` or [from source](https://vllm.readthedocs.io/en/latest/getting_started/installation.html#build-from-source).
As recommended in the official vLLM documentation, install vLLM in a **fresh** conda environment:
```bash
conda create -n vllm python=3.10 -y
conda activate vllm
pip install vllm # Ensure vllm>=0.6.2 for compatibility with Llama-3.2. If Llama-3.2 is not used, vllm==0.6.1 is sufficient.
```
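To confirm that the installed version meets the requirement noted in the install command above, you can check it from Python (a quick optional sanity check):
```python
import vllm

# Llama-3.2 support requires vllm >= 0.6.2; otherwise vllm == 0.6.1 is sufficient.
print(vllm.__version__)
```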
Clone the repository and navigate to the inference directory:
```bash
git clone https://github.com/bigai-ai/QA-Synthesizer.git
cd QA-Synthesizer/vllm_inference
RESULTS_DIR=./eval_results # Directory for saving evaluation scores
```
### 2) Evaluate
Run the following commands:
```bash
# Specify the domain: choose from ['remote-sensing', 'CLRS', 'UC_Merced', 'FloodNet', 'NWPU-Captions']
# 'remote-sensing' runs inference on all remote sensing tasks; others run on individual tasks.
DOMAIN='remote-sensing'
# Specify the model type: choose from ['llava', 'qwen2_vl', 'mllama']
# For LLaVA-v1.6, Qwen2-VL, and Llama-3.2-Vision-Instruct, respectively.
MODEL_TYPE='qwen2_vl'
# Set the model repository ID on Hugging Face. Examples:
# "Qwen/Qwen2-VL-2B-Instruct", "AdaptLLM/remote-sensing-Qwen2-VL-2B-Instruct" for MLLMs based on Qwen2-VL-Instruct.
# "meta-llama/Llama-3.2-11B-Vision-Instruct", "AdaptLLM/remote-sensing-Llama-3.2-11B-Vision-Instruct" for MLLMs based on Llama-3.2-Vision-Instruct.
# "AdaptLLM/remote-sensing-LLaVA-NeXT-Llama3-8B" for MLLMs based on LLaVA-v1.6.
MODEL=AdaptLLM/remote-sensing-Qwen2-VL-2B-Instruct
# Set the directory for saving model prediction outputs:
OUTPUT_DIR=./output/AdaMLLM-remote-sensing-Qwen-2B_${DOMAIN}
# Run inference with data parallelism; adjust CUDA devices as needed:
CUDA_VISIBLE_DEVICES='0,1,2,3,4,5,6,7' bash run_inference.sh ${MODEL} ${DOMAIN} ${MODEL_TYPE} ${OUTPUT_DIR} ${RESULTS_DIR}
```
Detailed scripts to reproduce our results are provided in [Evaluation.md](https://github.com/bigai-ai/QA-Synthesizer/blob/main/docs/Evaluation.md).
### 3) Results
The evaluation results are stored in `./eval_results`, and the model prediction outputs are in `./output`.
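If you want to inspect the saved predictions programmatically, you can walk the output directory; a minimal sketch, assuming the prediction files are written as JSON (the actual filenames and format are determined by `run_inference.sh`):
```python
import json
from pathlib import Path

# Match this to the OUTPUT_DIR used in the evaluation command above.
output_dir = Path('./output/AdaMLLM-remote-sensing-Qwen-2B_remote-sensing')

# Print the first record of each JSON prediction file (file layout is an assumption).
for pred_file in sorted(output_dir.glob('*.json')):
    with pred_file.open() as f:
        preds = json.load(f)
    print(pred_file.name, preds[0] if isinstance(preds, list) else preds)
```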
## Citation
If you find our work helpful, please cite us.
[AdaMLLM](https://huggingface.co/papers/2411.19930)
```bibtex
@article{adamllm,
  title={On Domain-Specific Post-Training for Multimodal Large Language Models},
  author={Cheng, Daixuan and Huang, Shaohan and Zhu, Ziyu and Zhang, Xintong and Zhao, Wayne Xin and Luan, Zhongzhi and Dai, Bo and Zhang, Zhenliang},
  journal={arXiv preprint arXiv:2411.19930},
  year={2024}
}
```
[Adapt LLM to Domains](https://huggingface.co/papers/2309.09530) (ICLR 2024)
```bibtex
@inproceedings{cheng2024adapting,
  title={Adapting Large Language Models via Reading Comprehension},
  author={Daixuan Cheng and Shaohan Huang and Furu Wei},
  booktitle={The Twelfth International Conference on Learning Representations},
  year={2024},
  url={https://openreview.net/forum?id=y886UXPEZ0}
}
```