Adapting Multimodal Large Language Models to Domains via Post-Training
This repo contains the biomedicine MLLM developed from InternVL3-1B in our paper: On Domain-Adaptive Post-Training for Multimodal Large Language Models. The corresponding training dataset is biomed-visual-instructions.
The main project page is: Adapt-MLLM-to-Domains
1. To Chat with AdaMLLM
Our model architecture aligns with the base model, InternVL3-1B, so you can refer to the official OpenGVLab/InternVL3-1B model card for usage instructions.
Note: For AdaMLLM, always place the image at the beginning of the input instruction in the messages.
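Below is a minimal chat sketch following the InternVL `model.chat` interface (loaded with `trust_remote_code`). The single-tile 448x448 preprocessing, the image path `example.jpg`, and the question text are illustrative placeholders; for higher-resolution inputs, substitute the dynamic-tiling pipeline from the official OpenGVLab/InternVL3-1B card.

```python
import torch
import torchvision.transforms as T
from PIL import Image
from transformers import AutoModel, AutoTokenizer

path = "AdaptLLM/biomed-InternVL3-1B"
model = AutoModel.from_pretrained(
    path, torch_dtype=torch.bfloat16, low_cpu_mem_usage=True, trust_remote_code=True
).eval().cuda()
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True, use_fast=False)

# Minimal single-tile preprocessing (ImageNet normalization, 448x448 input);
# the official InternVL3-1B card uses dynamic tiling instead.
transform = T.Compose([
    T.Lambda(lambda img: img.convert("RGB")),
    T.Resize((448, 448)),
    T.ToTensor(),
    T.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)),
])
pixel_values = transform(Image.open("example.jpg")).unsqueeze(0).to(torch.bfloat16).cuda()

# Per the note above, the image placeholder goes at the beginning of the instruction.
question = "<image>\nDescribe the key findings in this medical image."
generation_config = dict(max_new_tokens=256, do_sample=False)
response = model.chat(tokenizer, pixel_values, question, generation_config)
print(response)
```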
2. Domain-Specific Benchmarks
We provide biomed-VQA-benchmark for evaluating MLLMs on domain-specific tasks.
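For example, the evaluation data can be pulled with the `datasets` library; the repo ID, config name, and split below are assumptions to verify against the benchmark card.

```python
from datasets import load_dataset

# Repo ID, config name, and split are assumptions; see the benchmark card for the exact names.
dataset = load_dataset("AdaptLLM/biomed-VQA-benchmark", "SLAKE", split="test")
print(dataset[0])  # Each example pairs a medical image with a question and a reference answer.
```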
3. To Reproduce this Domain-Adapted MLLM
Using our training data, biomed-visual-instructions, you can easily reproduce our models based on the LlamaFactory repository.
For reference, we train from OpenGVLab/InternVL3-1B-hf (note that we train from the -hf version) for 1 epoch with a learning rate of 1e-5 and a global batch size of 128.
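As a rough guide, these hyperparameters map onto a LLaMA-Factory-style SFT config such as the sketch below. The dataset name, template name, GPU count, and output path are placeholders, not values from our paper; check the exact keys against the LlamaFactory documentation before launching `llamafactory-cli train`.

```python
import yaml

# A rough LLaMA-Factory SFT config matching the settings above (1 epoch, lr 1e-5,
# global batch size 128 = 8 GPUs x 4 per-device x 4 gradient accumulation steps).
# Dataset name, template, and output_dir are placeholders; the dataset must be
# registered in LLaMA-Factory's dataset_info.json before training.
config = {
    "model_name_or_path": "OpenGVLab/InternVL3-1B-hf",
    "stage": "sft",
    "do_train": True,
    "finetuning_type": "full",
    "dataset": "biomed_visual_instructions",  # placeholder dataset key
    "template": "intern_vl",                  # assumed template name for InternVL3-hf
    "num_train_epochs": 1.0,
    "learning_rate": 1.0e-5,
    "per_device_train_batch_size": 4,
    "gradient_accumulation_steps": 4,         # with 8 GPUs -> global batch size 128
    "bf16": True,
    "output_dir": "saves/biomed-internvl3-1b",
}

with open("biomed_internvl3_sft.yaml", "w") as f:
    yaml.safe_dump(config, f, sort_keys=False)
```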
After training, follow these instructions to convert the -hf version back to the official OpenGVLab/InternVL3-1B format.
Citation
If you find our work helpful, please cite us.
@article{adamllm,
title={On Domain-Specific Post-Training for Multimodal Large Language Models},
author={Cheng, Daixuan and Huang, Shaohan and Zhu, Ziyu and Zhang, Xintong and Zhao, Wayne Xin and Luan, Zhongzhi and Dai, Bo and Zhang, Zhenliang},
journal={arXiv preprint arXiv:2411.19930},
year={2024}
}
Adapt LLM to Domains (ICLR 2024)
@inproceedings{
cheng2024adapting,
title={Adapting Large Language Models via Reading Comprehension},
author={Daixuan Cheng and Shaohan Huang and Furu Wei},
booktitle={The Twelfth International Conference on Learning Representations},
year={2024},
url={https://openreview.net/forum?id=y886UXPEZ0}
}