MobileVLM V2: Faster and Stronger Baseline for Vision Language Model
Abstract
We introduce MobileVLM V2, a family of significantly improved vision language models built upon MobileVLM, which demonstrates that a delicate orchestration of novel architectural design, an improved training scheme tailored for mobile VLMs, and rich, high-quality dataset curation can substantially benefit VLMs' performance. Specifically, MobileVLM V2 1.7B achieves better or on-par performance on standard VLM benchmarks compared with much larger VLMs at the 3B scale. Notably, our 3B model outperforms a large variety of VLMs at the 7B+ scale. Our models will be released at https://github.com/Meituan-AutoML/MobileVLM.
Community
This is an automated message from the Librarian Bot. The following papers, recommended by the Semantic Scholar API, are similar to this paper:
- MobileVLM : A Fast, Strong and Open Vision Language Assistant for Mobile Devices (2023)
- MouSi: Poly-Visual-Expert Vision-Language Models (2024)
- VILA: On Pre-training for Visual Language Models (2023)
- Honeybee: Locality-enhanced Projector for Multimodal LLM (2023)
- InternVL: Scaling up Vision Foundation Models and Aligning for Generic Visual-Linguistic Tasks (2023)