|
---
pipeline_tag: visual-question-answering
---
|
|
|
## MiniCPM-V |
|
**MiniCPM-V** is an efficient multimodal LLM with promising performance for deployment. The model is built on MiniCPM-2.4B and SigLip-400M, connected by a perceiver resampler. Notable features of MiniCPM-V include:
|
|
|
- 🚀 **High Efficiency.** |
|
|
|
MiniCPM-V can be **efficiently deployed on most GPUs and personal computers**, and **even on edge devices such as mobile phones**. For visual encoding, we compress the image representations into 64 tokens via a perceiver resampler, far fewer than the 512+ tokens typical of LMMs that use MLP-based projection. As a result, MiniCPM-V runs with **much lower memory cost and higher inference speed** (see the sketch after this list).
|
|
|
- 🔥 **Promising Performance.** |
|
|
|
MiniCPM-V achieves **state-of-the-art performance** on multiple benchmarks (including MMMU, MME, and MMBench) among models of comparable size, surpassing existing LMMs built on Phi-2. It even **achieves comparable or better performance than the 9.6B Qwen-VL-Chat**.
|
|
|
- 🙌 **Bilingual Support.** |
|
|
|
MiniCPM-V is **the first edge-deployable LMM supporting bilingual multimodal interaction in English and Chinese**. This is achieved by generalizing multimodal capabilities across languages, a technique from our ICLR 2024 spotlight [paper](https://arxiv.org/abs/2308.12038). |
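For intuition, here is a minimal sketch of how a perceiver-style resampler compresses a variable-length sequence of image features into a fixed set of 64 tokens. This is not the released MiniCPM-V implementation; the module, layer count, and dimensions below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class PerceiverResampler(nn.Module):
    """Illustrative sketch: cross-attend 64 learned queries to image features.

    NOT the MiniCPM-V source; dimensions and structure are hypothetical,
    chosen only to show the compression idea.
    """
    def __init__(self, dim=1152, num_queries=64, num_heads=8):
        super().__init__()
        # 64 learned query vectors; their count fixes the output length.
        self.queries = nn.Parameter(torch.randn(num_queries, dim))
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, image_feats):
        # image_feats: (batch, n_patches, dim), where n_patches >> 64
        b = image_feats.size(0)
        q = self.queries.unsqueeze(0).expand(b, -1, -1)
        # Each query attends over all patch features and summarizes them.
        out, _ = self.attn(q, image_feats, image_feats)
        return self.norm(out)  # (batch, 64, dim): fixed-length visual tokens

# Example: 1024 patch embeddings compressed into 64 visual tokens.
feats = torch.randn(1, 1024, 1152)
resampler = PerceiverResampler()
print(resampler(feats).shape)  # torch.Size([1, 64, 1152])
```

Unlike an MLP projector that keeps one token per patch, the fixed query count makes the language model's input length independent of the number of image patches, which is what keeps memory use and latency low.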
|
|
|
<div align="center"> |
|
|
|
<table style="margin: 0px auto;"> |
|
<thead> |
|
<tr> |
|
<th align="left">Model</th> |
|
<th>Size</th> |
|
<th>MME</th> |
|
<th nowrap="nowrap" >MMB dev (en)</th> |
|
<th nowrap="nowrap" >MMB dev (zh)</th> |
|
<th nowrap="nowrap" >MMMU val</th> |
|
<th nowrap="nowrap" >CMMMU val</th> |
|
</tr> |
|
</thead> |
|
<tbody align="center"> |
|
<tr> |
|
<td align="left">LLaVA-Phi</td> |
|
<td align="right">3.0B</td> |
|
<td>1335</td> |
|
<td>59.8</td> |
|
<td>- </td> |
|
<td>- </td> |
|
<td>- </td> |
|
</tr> |
|
<tr> |
|
<td nowrap="nowrap" align="left">MobileVLM</td> |
|
<td align="right">3.0B</td> |
|
<td>1289</td> |
|
<td>59.6</td> |
|
<td>- </td> |
|
<td>- </td> |
|
<td>- </td> |
|
</tr> |
|
<tr> |
|
<td nowrap="nowrap" align="left" >Imp-v1</td> |
|
<td align="right">3B</td> |
|
<td>1434</td> |
|
<td>66.5</td> |
|
<td>- </td> |
|
<td>- </td> |
|
<td>- </td> |
|
</tr> |
|
<tr> |
|
<td align="left" >Qwen-VL-Chat</td> |
|
<td align="right" >9.6B</td> |
|
<td>1487</td> |
|
<td>60.6 </td> |
|
<td>56.7 </td> |
|
<td>35.9 </td> |
|
<td>30.7 </td> |
|
</tr> |
|
<tr> |
|
<td nowrap="nowrap" align="left" ><b>MiniCPM-V</b></td> |
|
<td align="right">3B </td> |
|
<td>1452 </td> |
|
<td>67.3 </td> |
|
<td>61.9 </td> |
|
<td>34.7 </td> |
|
<td>32.1 </td> |
|
</tr> |
|
</tbody> |
|
</table> |
|
|
|
</div> |
|
|
|
<table> |
|
<tr> |
|
<td> |
|
<p> |
|
<img src="data/Mushroom_en.gif" width="400"/> |
|
</p> |
|
</td> |
|
<td> |
|
<p> |
|
<img src="data/Snake_en.gif" width="400"/> |
|
</p> |
|
</td> |
|
</tr> |
|
</table> |
|
|
|
## Demo |
|
Try out the demo of [MiniCPM-V](http://120.92.209.146:80).
|
|
|
|
|
## Usage |
|
|
|
```python
import torch
from PIL import Image
from transformers import AutoModel, AutoTokenizer

# Load the model and tokenizer; trust_remote_code is required for the custom chat API.
model = AutoModel.from_pretrained('openbmb/MiniCPM-V', trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-V', trust_remote_code=True)
model.eval().cuda()

image = Image.open('xx.jpg').convert('RGB')
question = '请描述一下该图像'  # "Please describe this image."

res, context, _ = model.chat(
    image=image,
    question=question,
    context=None,        # no prior conversation; chat returns an updated context
    tokenizer=tokenizer,
    sampling=True,       # sample instead of greedy decoding
    temperature=0.7
)
print(res)
```
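Since `chat` returns an updated `context`, a follow-up turn can presumably be issued by passing that context back in. A minimal sketch under that assumption, reusing the variables from the example above (the follow-up question itself is hypothetical):

```python
# Hypothetical follow-up turn, assuming the returned context carries the
# conversation state from the first exchange.
follow_up = 'What colors stand out in the image?'
res2, context, _ = model.chat(
    image=image,
    question=follow_up,
    context=context,     # carry over the first turn's conversation state
    tokenizer=tokenizer,
    sampling=True,
    temperature=0.7
)
print(res2)
```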
|
|