Model
llava-internlm-7b-pretrain is a LLaVA projector pretrained with InternLM-Chat-7B and CLIP-ViT-Large-patch14-336 on the LLaVA-Pretrain dataset by XTuner. The fine-tuned LLaVA model can be found at xtuner/llava-internlm-7b.
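As a minimal usage sketch (not part of the original card), the projector weights can be fetched from the Hub with huggingface_hub; the repo id is taken from this model's name:

```python
# Minimal sketch: download the pretrained LLaVA projector weights from the Hub.
from huggingface_hub import snapshot_download

# repo_id assumed from the model name on this card
local_dir = snapshot_download(repo_id="xtuner/llava-internlm-7b-pretrain")
print(f"Projector weights downloaded to: {local_dir}")
```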
Citation
@misc{2023xtuner,
    title={XTuner: A Toolkit for Efficiently Fine-tuning LLM},
    author={XTuner Contributors},
    howpublished={\url{https://github.com/InternLM/xtuner}},
    year={2023}
}