---
pipeline_tag: text-classification
---
## MiniCPM-V
**MiniCPM-V** is an efficient model with promising performance for deployment. It is built on MiniCPM-2.4B and SigLip-400M, connected by a perceiver resampler. Notable features of MiniCPM-V include:
- 🚀 **High Efficiency.**
MiniCPM-V can be **efficiently deployed on most GPU cards and personal computers**, and **even on edge devices such as mobile phones**. For visual encoding, we compress the image representations into 64 tokens via a perceiver resampler, far fewer than in LMMs that use an MLP projector (typically > 512 tokens); a minimal sketch of this resampler appears after the feature list. This allows MiniCPM-V to run with **much lower memory cost and higher inference speed**.
- 🔥 **Promising Performance.**
MiniCPM-V achieves **state-of-the-art performance** on multiple benchmarks (including MMMU, MME, and MMBench) among models with comparable sizes, surpassing existing LMMs built on Phi-2. It even **achieves comparable or better performance than the 9.6B Qwen-VL-Chat**.
- 🙌 **Bilingual Support.**
MiniCPM-V is **the first edge-deployable LMM supporting bilingual multimodal interaction in English and Chinese**. This is achieved by generalizing multimodal capabilities across languages, a technique from our ICLR 2024 spotlight [paper](https://arxiv.org/abs/2308.12038).
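The token compression mentioned above follows the perceiver-resampler idea: a small set of learned query vectors cross-attends over the image patch features from the vision encoder, so the language model only ever sees 64 visual tokens regardless of how many patches the image produces. The snippet below is a minimal, illustrative sketch of that mechanism; the class, layer layout, and hyperparameters (e.g. a 1152-dim feature size) are our assumptions, not the actual MiniCPM-V implementation.

```python
import torch
import torch.nn as nn

class PerceiverResampler(nn.Module):
    """Illustrative sketch: compress a variable number of image patch
    features into a fixed set of 64 visual tokens via cross-attention.
    Names and hyperparameters are assumptions, not MiniCPM-V's code."""

    def __init__(self, dim=1152, num_queries=64, num_heads=8):
        super().__init__()
        # 64 learned query vectors; their outputs become the visual tokens
        self.queries = nn.Parameter(torch.randn(num_queries, dim))
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, patch_features):
        # patch_features: (batch, num_patches, dim) from the vision encoder
        b = patch_features.size(0)
        q = self.queries.unsqueeze(0).expand(b, -1, -1)
        # Queries attend to all patches; output is always 64 tokens
        out, _ = self.attn(q, patch_features, patch_features)
        return self.norm(out)  # (batch, 64, dim)

# Example: 1024 patch tokens in, 64 visual tokens out for the LLM
resampler = PerceiverResampler()
visual_tokens = resampler(torch.randn(1, 1024, 1152))
print(visual_tokens.shape)  # torch.Size([1, 64, 1152])
```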
<div align="center">
<table style="margin: 0px auto;">
<thead>
<tr>
<th align="left">Model</th>
<th>Size</th>
<th>MME</th>
<th nowrap="nowrap" >MMB dev (en)</th>
<th nowrap="nowrap" >MMB dev (zh)</th>
<th nowrap="nowrap" >MMMU val</th>
<th nowrap="nowrap" >CMMMU val</th>
</tr>
</thead>
<tbody align="center">
<tr>
<td align="left">LLaVA-Phi</td>
<td align="right">3.0B</td>
<td>1335</td>
<td>59.8</td>
<td>- </td>
<td>- </td>
<td>- </td>
</tr>
<tr>
<td nowrap="nowrap" align="left">MobileVLM</td>
<td align="right">3.0B</td>
<td>1289</td>
<td>59.6</td>
<td>- </td>
<td>- </td>
<td>- </td>
</tr>
<tr>
<td nowrap="nowrap" align="left" >Imp-v1</td>
<td align="right">3B</td>
<td>1434</td>
<td>66.5</td>
<td>- </td>
<td>- </td>
<td>- </td>
</tr>
<tr>
<td align="left" >Qwen-VL-Chat</td>
<td align="right" >9.6B</td>
<td>1487</td>
<td>60.6 </td>
<td>56.7 </td>
<td>35.9 </td>
<td>30.7 </td>
</tr>
<tr>
<td nowrap="nowrap" align="left" ><b>MiniCPM-V</b></td>
<td align="right">3B </td>
<td>1452 </td>
<td>67.3 </td>
<td>61.9 </td>
<td>34.7 </td>
<td>32.1 </td>
</tr>
</tbody>
</table>
</div>
<table>
<tr>
<td>
<p>
<img src="data/Mushroom_en.gif" width="400"/>
</p>
</td>
<td>
<p>
<img src="data/Snake_en.gif" width="400"/>
</p>
</td>
</tr>
</table>
## Demo
Try out the [MiniCPM-V demo](http://120.92.209.146:80).
## Usage
```python
import torch
from PIL import Image
from transformers import AutoModel, AutoTokenizer
model = AutoModel.from_pretrained('openbmb/MiniCPM-V', trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-V', trust_remote_code=True)
model.eval().cuda()  # inference mode, on GPU

image = Image.open('xx.jpg').convert('RGB')
question = '请描述一下该图像'  # "Please describe this image"
res, context, _ = model.chat(
    image=image,
    question=question,
    context=None,        # no previous conversation history
    tokenizer=tokenizer,
    sampling=True,
    temperature=0.7
)
print(res)
```
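Since `chat()` returns the updated conversation `context`, a follow-up turn can presumably be issued by passing that context back in. The sketch below assumes the same `chat()` signature as above and that the previous snippet has already been run; the follow-up question is just an example.

```python
# Follow-up turn (sketch): reuse the context returned by the first call
followup = 'What is the object in the foreground?'
res2, context, _ = model.chat(
    image=image,
    question=followup,
    context=context,     # carry over the conversation history
    tokenizer=tokenizer,
    sampling=True,
    temperature=0.7
)
print(res2)
```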