---
license: other
license_name: kanana
license_link: LICENSE
language:
- ko
- en
base_model:
- kakaocorp/kanana-1.5-v-3b-instruct
pipeline_tag: image-text-to-text
library_name: transformers
---

<p align="center">
<br>
    <picture>
        <img src="./assets/logo/kanana-logo.png" width="60%" style="margin: 40px auto;">
    </picture>
<br>
</p>

<p align="center">
🤗 <a href="https://kko.kakao.com/kananallm">1.5 HF Models</a> &nbsp;|&nbsp;
📕 <a href="https://tech.kakao.com/posts/714">Blog</a>
</p>

<br>

## Table of Contents

- [Kanana-1.5-v-3b-instruct](#kanana-15-v-3b-instruct)
- [Intended Use](#intended-use)
- [Model Details](#model-details)
- [Evaluation](#evaluation)
  - [Model Configuration Summary](#model-configuration-summary)
  - [Overview](#overview)
  - [Image Benchmarks (EN)](#image-benchmarks-en)
  - [Image Benchmarks (KO)](#image-benchmarks-ko)
  - [Multimodal Instruction Following Benchmarks (EN, KO)](#multimodal-instruction-following-benchmarks-en-ko)
  - [Note on Benchmarking Methodology](#note-on-benchmarking-methodology)
- [Usage](#usage)
  - [Requirements](#requirements)
  - [Quickstart](#quickstart)
- [Limitations](#limitations)
- [Contributors](#contributors)
- [Contact](#contact)

<br>


# kanana-1.5-v-3b-instruct

The Unified Foundation Model (UFO) task force of the Kanana team at Kakao developed and released the Kanana-V family of multimodal large language models (MLLMs), a collection of pretrained text/image-to-text (TI2T) models.



## Intended Use

kanana-1.5-v-3b-instruct is intended for research and application development in multimodal understanding and text generation tasks. Typical use cases include image captioning, document understanding, OCR-based reasoning, and multimodal instruction following in both English and Korean. The model is optimized for both general-purpose and Korea-specific benchmarks, making it suitable for bilingual environments.




## Model Details

- **Developed by:** Unified Foundation Model (UFO) TF at Kakao
- **Language(s):** English (`en`), Korean (`ko`)
- **Model Architecture:** kanana-1.5-v-3b-instruct has 3.67B parameters in total and combines an image encoder, a C-Abstractor projector, and the kanana-1.5-3b-instruct language model (a conceptual sketch follows this list).
- **Input:** The models accept text and image inputs.
- **Output:** The models generate text only.
- **Context Length:** 32k
- **Knowledge Cutoff Date:** June 30, 2024
- **Model Release Date:** July 24, 2025
- **License:** kanana-license
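
To make the composition above concrete, here is a minimal conceptual sketch of how an image encoder, a C-Abstractor-style projector, and the language model fit together. It is not the released implementation; all dimensions, names, and the pooling scheme are illustrative assumptions.

```python
# Conceptual sketch only — NOT the released kanana-1.5-v implementation.
# Dimensions, names, and the pooling scheme are illustrative assumptions.
import torch
import torch.nn as nn

class TinyCAbstractor(nn.Module):
    """C-Abstractor-style projector: compress the image-encoder feature grid
    with a convolution + adaptive pooling, then project into the LLM space."""
    def __init__(self, vis_dim: int = 1024, llm_dim: int = 2048, num_queries: int = 144):
        super().__init__()
        self.conv = nn.Conv2d(vis_dim, vis_dim, kernel_size=3, padding=1)
        side = int(num_queries ** 0.5)           # 144 queries -> a 12x12 grid
        self.pool = nn.AdaptiveAvgPool2d(side)   # fixed visual-token budget per image
        self.proj = nn.Linear(vis_dim, llm_dim)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, vis_dim, H, W) feature grid from the image encoder
        x = self.pool(torch.relu(self.conv(feats)))  # (B, vis_dim, side, side)
        x = x.flatten(2).transpose(1, 2)             # (B, num_queries, vis_dim)
        return self.proj(x)                          # (B, num_queries, llm_dim)

# The projected visual tokens are spliced into the language model's input
# sequence at the "<image>" placeholder positions before decoding.
feats = torch.randn(1, 1024, 24, 24)
print(TinyCAbstractor()(feats).shape)  # torch.Size([1, 144, 2048])
```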
 



## Evaluation

### Model Configuration Summary

| Model                      | LLM                              | Total Parameters |
|----------------------------|----------------------------------|-----------|
| **kanana-1.5-v-3b-instruct**        | kanana-1.5-3b-instruct  | 3.67B     |
| HCX-SEED-Vision-3B         | HyperCLOVAX-SEED-Text-Base-3B    | 3.72B     |
| Phi-3-Vision               | Phi-3-Mini                       | 4.15B     |
| Qwen2.5-VL-3B-Instruct     | Qwen2.5-3B                       | 3.75B     |
| InternVL2.5-4B             | Qwen2.5-3B-Instruct              | 3.94B     |

### Overview

| Model                      | All    | Image (EN) | Image (KO) | IF (EN, KO) |
|----------------------------|--------|------------|------------|-------------|
| **kanana-1.5-v-3b-instruct**        | 73.22  | 74.00      | 68.27      | 77.39       |
| HCX-SEED-Vision-3B         | 59.00  | 64.81      | 51.96      | 60.23       |
| Phi-3-Vision               | 48.84  | 65.41      | 36.40      | 44.71       |
| Qwen2.5-VL-3B-Instruct     | 63.54  | 73.97      | 60.60      | 56.04       |
| InternVL2.5-4B             | 61.35  | 74.73      | 54.68      | 54.63       |

The **All** score is the unweighted mean of the three category averages; for kanana-1.5-v-3b-instruct, (74.00 + 68.27 + 77.39) / 3 = 73.22.

### Image Benchmarks (EN)

| Model                      | Average | MMMU (Val) | MathVista | DocVQA | ChartQA | OCRBench | InfoVQA | TextVQA | RealWorldQA | MMStar | MMB   | SEED-image | MMVet | LLaVA-Wild | ScienceQA | AI2D  |
|----------------------------|--------------|------------|-----------|--------|---------|----------|---------|---------|-------------|--------|-------|------------|-------|------------|-----------|-------|
| **kanana-1.5-v-3b-instruct**            | 74.00        | 43.89      | 56.00     | 93.06  | 81.20   | 82.50    | 73.62   | 78.62   | 65.36       | 56.32  | 78.44 | 75.17      | 65.87 | 89.60      | 95.61     | 74.81 |
| HCX-SEED-Vision-3B         | 64.81        | 38.89      | 47.40     | 79.87  | 71.88   | 62.90    | 55.59   | 73.51   | 62.48       | 46.66  | 72.42 | 74.84      | 47.27 | 79.30      | 86.84     | 72.31 |
| Phi-3-Vision               | 65.41        | 45.33      | 43.60     | 87.04  | 81.40   | 63.60    | 54.80   | 69.61   | 59.08       | 47.47  | 73.37 | 71.69      | 45.96 | 70.40      | 90.84     | 76.98 |
| Qwen2.5-VL-3B-Instruct     | 73.97        | 50.67      | 62.00     | 94.19  | 83.60   | 79.10    | 77.22   | 77.77   | 59.74       | 56.26  | 77.75 | 74.83      | 61.06 | 96.90      | 79.69     | 78.79 |
| InternVL2.5-4B             | 74.73        | 52.33      | 61.80     | 92.13  | 82.76   | 79.20    | 69.73   | 78.24   | 62.88       | 59.72  | 81.96 | 75.59      | 61.38 | 86.30      | 97.14     | 79.83 |


### Image Benchmarks (KO)

| Model                      | Average | KoOCRBench | KoMMDBench | KoChartTask | KoMathSolution | KoCosMed | KoFoodMenu | KoEntity | KoExam | KoCelebV2 |
|----------------------------|--------------|----------------------|------------|-------------|----------------|----------|------------|----------|--------|-----------|
| **kanana-1.5-v-3b-instruct**            | 68.27        | 85.93                | 74.00      | 84.96       | 36.88          | 87.58    | 70.84      | 72.04    | 58.99  | 43.24     |
| HCX-SEED-Vision-3B         | 51.96        | 32.91                | 64.57      | 73.55       | 27.88          | 78.16    | 57.08      | 64.12    | 31.82  | 37.58     |
| Phi-3-Vision               | 36.40        | 25.13                | 37.93      | 52.36       | 38.75          | 56.75    | 34.70      | 31.71    | 24.05  | 26.25     |
| Qwen2.5-VL-3B-Instruct     | 60.60        | 50.67                | 61.75      | 84.96       | 47.13          | 82.01    | 66.32      | 58.15    | 60.68  | 33.72     |
| InternVL2.5-4B             | 54.68        | 20.52                | 62.65      | 82.61       | 46.50          | 82.66    | 65.09      | 50.42    | 47.43  | 34.23     |

### Multimodal Instruction Following Benchmarks (EN, KO)

| Model                      | Average      | MIABench | MIABench-Ko | MM-IFEval | MM-OmniAlign |
|----------------------------|--------------|----------|-------------|-----------|--------------|
| **kanana-1.5-v-3b-instruct**            | 77.39        | 90.28    | 91.17       | 56.67     | 71.43        |
| HCX-SEED-Vision-3B         | 60.23        | 85.81    | 81.80       | 47.91     | 25.40        |
| Phi-3-Vision               | 44.71        | 85.78    | 38.35       | 44.37     | 10.32        |
| Qwen2.5-VL-3B-Instruct     | 56.04        | 82.55    | 59.61       | 39.14     | 42.86        |
| InternVL2.5-4B             | 54.63        | 85.68    | 68.35       | 43.06     | 21.43        |



### Note on Benchmarking Methodology

All benchmarks were re-measured under identical software conditions to ensure fair comparison.

- **[VLMEvalKit](https://github.com/open-compass/VLMEvalKit)** was used for MMMU, MathVista, ScienceQA, MIA-Bench, MM-IFEval and MM-OmniAlign.

- **[lmms-eval](https://github.com/EvolvingLMMs-Lab/lmms-eval)** was employed for DocVQA, ChartQA, OCRBench, InfoVQA, TextVQA, RealWorldQA, MMStar, MMB, and SEED-image.

- HCX-SEED-Vision-3B was evaluated without any auxiliary tools (e.g., external OCR engines or Lens features), as these tools are not publicly available and therefore could not be included in our evaluation setup.

- **Important note for ChartQA**: The original rule-based parser used by lmms-eval marked otherwise-correct answers ending with a period (".") as incorrect. To address this, the parser logic was modified to remove any trailing period before parsing the response; all ChartQA results reported here were obtained with this adjustment (sketched below).
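
The sketch below illustrates the kind of adjustment described; the function names are hypothetical and this is not lmms-eval's actual parser, though the 5% tolerance reflects ChartQA's standard relaxed-accuracy criterion.

```python
def normalize_response(response: str) -> str:
    """Strip whitespace and any trailing period so an answer like "42."
    parses the same as "42" (illustrative; not the actual lmms-eval code)."""
    return response.strip().rstrip(".")

def relaxed_match(pred: str, gold: str, tolerance: float = 0.05) -> bool:
    """ChartQA-style relaxed accuracy: numeric answers may deviate from the
    gold value by up to 5%; non-numeric answers must match case-insensitively."""
    pred, gold = normalize_response(pred), normalize_response(gold)
    try:
        p, g = float(pred.rstrip("%")), float(gold.rstrip("%"))
        return p == g if g == 0 else abs(p - g) <= tolerance * abs(g)
    except ValueError:
        return pred.lower() == gold.lower()

assert relaxed_match("42.", "42")   # trailing period no longer fails
assert relaxed_match("41.5", "42")  # within the 5% relaxed tolerance
```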


The following in-house benchmarks evaluate Korean-language tasks and Korea-specific knowledge:

| Benchmark | Purpose |
|-----------|---------|
| **KoOCRBench** | Korean character recognition (OCR) |
| **KoMMDBench**, **KoEntity**, **KoCelebV2** | Korean knowledge & cultural visual QA |
| **KoFoodMenu**, **KoCosMed** | Korean text-based visual QA |
| **KoChartTask** | Chart understanding in Korean |
| **KoExam**, **KoMathSolution** | Multimodal problem-solving in Korean (general exams & mathematics) |
| **MIABench-Ko** | Korean multimodal instruction-following benchmark (derived from MIABench) |



## Usage

### Requirements

```bash
pip install transformers accelerate timm omegaconf
```
`transformers>=4.45.0` or the latest version is recommended. Note that loading the model requires `trust_remote_code=True`, as shown in the quickstart below, because the repository ships custom modeling and processing code.

### Quickstart

The following snippet demonstrates how to load the model and process input data using the `AutoModelForVision2Seq` and `AutoProcessor` auto classes from `transformers`.
```python
from PIL import Image
import torch
from transformers import AutoModelForVision2Seq, AutoProcessor

MODEL = "kakaocorp/kanana-1.5-v-3b-instruct"

# Load the model on the available device(s)
model = AutoModelForVision2Seq.from_pretrained(
    MODEL,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True
)
model.eval()

# Load processor
processor = AutoProcessor.from_pretrained(MODEL, trust_remote_code=True)

# Prepare input batch
batch = []
for _ in range(1):  # dummy loop to demonstrate batch processing
    image_files = [
        "./examples/waybill.png"
    ]

    sample = {
        "image": [Image.open(image_file_path).convert("RGB") for image_file_path in image_files],
        "conv": [
            {"role": "system", "content": "The following is a conversation between a curious human and AI assistant."},
            {"role": "user", "content": " ".join(["<image>"] * len(image_files))},
            {"role": "user", "content": "์‚ฌ์ง„์—์„œ ๋ณด๋‚ด๋Š” ์‚ฌ๋žŒ๊ณผ ๋ฐ›๋Š” ์‚ฌ๋žŒ ์ •๋ณด๋ฅผ json ํ˜•ํƒœ๋กœ ์ •๋ฆฌํ•ด์ค˜."},
        ]
    }

    batch.append(sample)
    
inputs = processor.batch_encode_collate(
    batch, padding_side="left", add_generation_prompt=True, max_length=8192
)
inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()}

# Set the generation config
gen_kwargs = {
    "max_new_tokens": 2048,
    "temperature": 0,
    "top_p": 1.0,
    "num_beams": 1,
    "do_sample": False,
}

# Generate text
gens = model.generate(
    **inputs,
    **gen_kwargs,
)
text_outputs = processor.tokenizer.batch_decode(gens, skip_special_tokens=True)
print(text_outputs)  # ['```json\n{\n  "보내는분": {\n    "성명": "카카오",\n    "주소": "경기도 성남시 판교역로 166"\n  },\n  "받는분": {\n    "성명": "카나나",\n    "주소": "제주도 제주시 첨단로 242"\n  }\n}\n```']
# Key translations: 보내는분 = sender, 받는분 = recipient, 성명 = name, 주소 = address.
```
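
Because the prompt inserts one `<image>` placeholder per input image, multi-image requests are a small variation on the sample above. A sketch continuing from the quickstart (the second file name and the question are hypothetical):

```python
# Continuing from the quickstart above; file names here are illustrative.
image_files = [
    "./examples/waybill.png",
    "./examples/receipt.png",  # hypothetical second image
]

sample = {
    "image": [Image.open(p).convert("RGB") for p in image_files],
    "conv": [
        {"role": "system", "content": "The following is a conversation between a curious human and AI assistant."},
        # One <image> placeholder per input image, in order:
        {"role": "user", "content": " ".join(["<image>"] * len(image_files))},
        {"role": "user", "content": "Compare the two documents and summarize the differences."},
    ],
}
# Encode, generate, and decode exactly as in the single-image example.
```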



## Limitations

- The model may generate inaccurate or misleading content, especially in scenarios requiring precise factual understanding (e.g., scientific diagrams or mathematical reasoning).
- Performance on languages other than Korean and English has not been evaluated and may be poor.
- The model is not designed for medical, legal, or other high-stakes domains.
- The model may reflect social biases present in the pretraining data.



## Contributors
- Beomhee Park, Byeonguk Bae, Byungseok Roh, Daejin Jo, Donghee Son, Dongjin Lee, Hyunwoong Ko, Jaemyung Lee, Jeehye Lee, Sunghun Kang, Wooyoung Kang
- Listed in alphabetical order (first name)



## Contact
- Kanana MLLM Core Team Technical Support: [email protected]
- Business & Partnership Contact: [email protected]