---
license: mit
pipeline_tag: image-text-to-text
library_name: transformers
base_model:
- OpenGVLab/InternVL2_5-8B
base_model_relation: finetune
datasets:
- OpenGVLab/MMPR-v1.1
language:
- multilingual
tags:
- internvl
- custom_code
---
# InternVL2_5-8B-MPO-hf
This is the Hugging Face Transformers version of [🤗 InternVL2_5-8B-MPO](https://huggingface.co/OpenGVLab/InternVL2_5-8B-MPO); it can be loaded directly through the `AutoModel` classes.
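As a minimal sketch of how the checkpoint can be loaded through the standard Transformers API (the repo id `OpenGVLab/InternVL2_5-8B-MPO-hf` and the example image URL are assumptions, and a recent `transformers` release with native InternVL support is required):

```python
import torch
from transformers import AutoProcessor, AutoModelForImageTextToText

# Assumed repo id for this checkpoint; adjust if it lives under a different name.
model_id = "OpenGVLab/InternVL2_5-8B-MPO-hf"

processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Multimodal chat messages; the image is fetched from the given URL by the processor.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/cat.jpg"},  # placeholder image URL
            {"type": "text", "text": "Describe this image."},
        ],
    }
]

inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128)

# Decode only the newly generated tokens.
print(processor.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```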
You may use this model for fine-tuning on downstream tasks. For efficient fine-tuning, we recommend our toolkit [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory).
Mainstream support is coming soon! For now, this PR may help you fine-tune the InternVL series. :)
Thanks to yonigozlan for adapting InternVL and wrapping its processor into the Transformers package; this was a massive undertaking. At least for me, that processor had been a source of trouble for a long time. :(