---
license: mit
pipeline_tag: image-feature-extraction
base_model: TokenOCR
base_model_relation: finetune
---

<center>

<h1 style="color: black;">A Token-level Text Image Foundation Model for Document Understanding</h1>

[\[📂 GitHub\]](https://github.com/Token-family/TokenOCR) [\[📖 Paper\]]() [\[🆕 Blog\]]() [\[🤗 HF Demo\]](https://huggingface.co/spaces/TongkunGuan/TokenOCR) [\[🚀 Quick Start\]](#quick-start)

</center>

<!-- <div align="center">
<img width="500" alt="image" src="https://cdn-uploads.huggingface.co/production/uploads/650d4a36cbd0c7d550d3b41b/dQ_JfK_I91WXzIq52D015.png">
</div> -->

<center>

<!-- # Introduction -->
<h2 style="color: #4CAF50;">Introduction</h2>

</center>

We are excited to announce the release of **`TokenOCR`**, the first token-level visual foundation model specifically tailored for text-image-related tasks and designed to support a variety of traditional downstream applications. To facilitate the pretraining of TokenOCR, we also devise a high-quality data production pipeline that constructs the first token-level image-text dataset, **`TokenIT`**, comprising 20 million images and 1.8 billion token-mask pairs. Furthermore, leveraging this foundation model's exceptional image-as-text capability, we seamlessly replace previous VFMs with TokenOCR to construct a document-level MLLM, **`TokenVL`**, for VQA-based document understanding tasks.

<center>

<!-- # Token Family -->
<h2 style="color: #4CAF50;">Token Family</h2>

</center>

<!-- ## TokenIT -->
<h2 style="color: #4CAF50;">TokenIT</h2>

The following figure provides an overview of the self-constructed token-level **TokenIT** dataset, comprising 20 million images and 1.8 billion token-mask pairs.

As depicted in Figure 2 (a), each sample in this dataset includes a raw image, a mask image, and a JSON file. The JSON file provides the question-answer pairs and several BPE tokens randomly selected from the answer, along with the ordinal number of each BPE token in the answer and its corresponding pixel value on the mask image. Consequently, **each BPE token corresponds one-to-one with a pixel-level mask**. The data ratios are summarized in Figure 2 (b). Figures 2 (c) and (d) further provide the distribution of the number of tokens per image type and a word cloud of the top 100 tokens, respectively.

<div align="center">
<img width="1000" alt="image" src="https://cdn-uploads.huggingface.co/production/uploads/650d4a36cbd0c7d550d3b41b/WcQwU3-xjyT5Vm-pZhACo.png">
</div>

<!--  -->
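
As a concrete illustration of the sample format described above, the sketch below reads one TokenIT sample and recovers the pixel mask of a single BPE token. The file layout and JSON keys (`question`, `answer`, `bpe_tokens`, `ordinal`, `pixel_value`) are illustrative assumptions based on the description, not the released schema.

```python
import json
import numpy as np
from PIL import Image

# Hypothetical layout of one TokenIT sample (names are illustrative, not the released schema)
raw_image = Image.open('sample_0000000/image.png').convert('RGB')
mask_image = np.array(Image.open('sample_0000000/mask.png'))  # each BPE token's region is drawn with its own pixel value
with open('sample_0000000/annotation.json') as f:
    ann = json.load(f)

# The JSON is described as holding QA pairs plus BPE tokens sampled from the answer,
# each with its ordinal position in the answer and its pixel value on the mask image.
question, answer = ann['question'], ann['answer']
for tok in ann['bpe_tokens']:                        # e.g. {"token": "2020", "ordinal": 3, "pixel_value": 7}
    token_mask = mask_image == tok['pixel_value']    # boolean mask for this BPE token
    print(tok['ordinal'], tok['token'], int(token_mask.sum()), 'pixels')
```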

A comparison with other visual foundation models:

| VFM | Granularity | Dataset | #Image | #Pairs |
|:-------------------|:------------|:---------|:------:|:------:|
| [CLIP](https://github.com/openai/CLIP) | image-level | WIT400M | 400M | 0.4B |
| [DINO](https://github.com/facebookresearch/dino) | image-level | ImageNet | 14M | - |
| [SAM](https://github.com/facebookresearch/segment-anything) | pixel-level | SA1B | 11M | 1.1B |
| **TokenOCR** | **token-level** | **TokenIT** | **20M** | **1.8B** |

<!-- ## TokenOCR -->
<h2 style="color: #4CAF50;">TokenOCR</h2>

### Model Architecture

An overview of the proposed TokenOCR, where the token-level image features and token-level language features are aligned within the same semantic space. This “image-as-text” alignment seamlessly facilitates user-interactive applications, including text segmentation, retrieval, and visual question answering.

<div align="center">
<img width="1000" alt="image" src="https://cdn-uploads.huggingface.co/production/uploads/650d4a36cbd0c7d550d3b41b/QTsvWxFJFTnISdhvbfZhD.png">
</div>

### Model Cards

The following table lists all models [🤗 link] of the TokenOCR series.

| Model Name | Description |
| :-----------------------: | :-------------------------------------------------------------------: |
| TokenOCR-4096-English | feature dimension of 4096; supports interaction with English text. |
| TokenOCR-4096-Chinese | feature dimension of 4096; supports interaction with Chinese text. |
| TokenOCR-2048-Bilingual | feature dimension of 2048; supports interaction with English and Chinese text. |
| TokenOCR-4096-English-seg | based on `TokenOCR-4096-English`, with background noise filtered out; use the prompt ' ' to highlight the background. |

### Quick Start

> \[!Warning\]
> 🚨 Note: In our experience, the `TokenOCR-2048-Bilingual` series is better suited for building MLLMs than the `-seg` version.

```python
import os

import torch
from transformers import AutoTokenizer

from internvl.model.internvl_chat import InternVLChatModel
from utils import post_process, generate_similiarity_map, load_image

checkpoint = 'TongkunGuan/TokenOCR_4096_English_seg'
image_path = './demo_images/0000000.png'
input_query = '11/12/2020'
out_dir = 'results'

os.makedirs(out_dir, exist_ok=True)

# Load the model, tokenizer, and the LLM's token embedding table
tokenizer = AutoTokenizer.from_pretrained(checkpoint, trust_remote_code=True, use_fast=False)
model = InternVLChatModel.from_pretrained(
    checkpoint, low_cpu_mem_usage=True, torch_dtype=torch.bfloat16).eval()
model = model.cuda()
tok_embeddings = model.language_model.get_input_embeddings()  # adjust if your checkpoint exposes this differently

# Load the image (pre-tiled pixel values plus the original PIL images)
pixel_values, images, target_aspect_ratio = load_image(image_path)

# Tokenize the query; queries starting with punctuation or digits are tokenized without a leading space
if input_query[0] in '!"#$%&\'()*+,-./0123456789:;<=>?@^_{|}~0123456789':
    input_ids = tokenizer(input_query)['input_ids'][1:]
else:
    input_ids = tokenizer(' ' + input_query)['input_ids'][1:]
input_ids = torch.tensor(input_ids, dtype=torch.long, device=model.device)
input_embeds = tok_embeddings(input_ids).clone()
all_bpe_strings = [tokenizer.decode(input_id) for input_id in input_ids]

# Compute token-level similarity between the query embeddings and the visual features
vit_embeds = model.forward_tokenocr(pixel_values.to(model.device))  # (vit_batch_size, 16*16, 2048)
vit_embeds_local, resized_size = post_process(vit_embeds, target_aspect_ratio)
token_features = vit_embeds_local / vit_embeds_local.norm(dim=-1, keepdim=True)
input_embeddings = input_embeds / input_embeds.norm(dim=-1, keepdim=True)
similarity = input_embeddings @ token_features.t()
attn_map = similarity.reshape(len(input_embeddings), resized_size[0], resized_size[1])

# Render and save the per-token similarity maps
generate_similiarity_map(images, attn_map, all_bpe_strings, out_dir, target_aspect_ratio)

# Run with: python quick_start.py
```
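
As a small follow-up (not part of the official `quick_start.py`), the sketch below turns the per-token similarity maps from the snippet above into a rough binary text-segmentation mask by averaging, upsampling, and thresholding. It assumes the `attn_map`, `images`, and `out_dir` variables defined above and that `images[0]` is a PIL image, as in the demo utilities; the 0.5 threshold is an arbitrary choice.

```python
import os
import torch
import torch.nn.functional as F
from PIL import Image

# Average the per-BPE-token maps into one query-level map: (1, 1, H_feat, W_feat)
query_map = attn_map.float().mean(dim=0, keepdim=True).unsqueeze(0)

# Upsample to the size of the original image and normalize to [0, 1]
width, height = images[0].size
query_map = F.interpolate(query_map, size=(height, width), mode='bilinear', align_corners=False)[0, 0]
query_map = (query_map - query_map.min()) / (query_map.max() - query_map.min() + 1e-6)

# Threshold and save as a grayscale mask image
mask = ((query_map > 0.5).to(torch.uint8) * 255).cpu().numpy()
Image.fromarray(mask, mode='L').save(os.path.join(out_dir, 'query_mask.png'))
```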

### Evaluation on Vision Capability

We present a comprehensive evaluation of the vision encoder’s performance across various domains and tasks. The evaluation is divided into three key categories:

(1) text retrieval;
(2) image segmentation;
(3) visual question answering.

This approach allows us to assess the representation quality of TokenOCR. Please refer to our technical report for more details.

#### text retrieval

<div align="left">
<img width="500" alt="image" src="https://cdn-uploads.huggingface.co/production/uploads/650d4a36cbd0c7d550d3b41b/b2b2g23o9GMmPe1PiCn0f.png">
</div>

<!--  -->

#### image segmentation

<div align="left">
<img width="500" alt="image" src="https://cdn-uploads.huggingface.co/production/uploads/650d4a36cbd0c7d550d3b41b/C15-Ica6XVfX6y_MgiVds.png">
</div>

<!--  -->

#### visual question answering

<div align="left">
<img width="500" alt="image" src="https://cdn-uploads.huggingface.co/production/uploads/650d4a36cbd0c7d550d3b41b/IbLZ0CxCxDkTaHAMe7M0Q.png">
</div>

<!--  -->

<!-- ## TokenVL -->
<h2 style="color: #4CAF50;">TokenVL</h2>

We employ TokenOCR as the visual foundation model and further develop an MLLM, named TokenVL, tailored for document understanding. Following the previous training paradigm, TokenVL also includes two stages:

**Stage 1: LLM-guided Token Alignment Training for text parsing tasks.**

<div align="center">
<img width="500" alt="image" src="https://cdn-uploads.huggingface.co/production/uploads/650d4a36cbd0c7d550d3b41b/gDr1fQg7I1nTIsiRWNHTr.png">
</div>

The framework of LLM-guided Token Alignment Training. Existing MLLMs primarily enhance spatial-wise text perception by integrating localization prompts to predict coordinates. However, this implicit method makes it difficult for these models to acquire a precise understanding of text locations. In contrast, the proposed token alignment uses BPE token masks to directly and explicitly align text with the corresponding pixels in the input image, enhancing the MLLM’s localization awareness.
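
To make the idea concrete, here is a minimal, hypothetical sketch of what a mask-based token alignment objective could look like: image features inside each BPE token's mask are pooled and pulled toward that token's language embedding. This is an illustration of the mechanism described above, not the released Stage 1 training code; the function name and tensor shapes are assumptions.

```python
import torch
import torch.nn.functional as F

def token_alignment_loss(feat_map: torch.Tensor,     # (H, W, C) token-level image features
                         token_masks: torch.Tensor,  # (N, H, W) binary mask per BPE token
                         token_embeds: torch.Tensor  # (N, C) BPE token embeddings from the LLM
                         ) -> torch.Tensor:
    """Pool image features inside each BPE token's mask and maximize their
    cosine similarity with that token's embedding (hypothetical sketch)."""
    H, W, C = feat_map.shape
    flat = feat_map.reshape(H * W, C)                                   # (HW, C)
    masks = token_masks.reshape(token_masks.shape[0], H * W).float()    # (N, HW)
    weights = masks / masks.sum(dim=1, keepdim=True).clamp(min=1.0)     # average-pool inside each mask
    pooled = weights @ flat                                             # (N, C) mask-pooled visual features
    cos = F.cosine_similarity(pooled, token_embeds, dim=-1)             # (N,)
    return (1.0 - cos).mean()
```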

**Stage 2: Supervised Instruction Tuning for VQA tasks.**

During the Supervised Instruction Tuning stage, we remove the token alignment branch, since the answer may not appear in the image for some reasoning tasks (e.g., "How much taller is the red bar compared to the green bar?"). This also ensures no extra computational overhead during inference. Finally, we inherit the remaining weights from LLM-guided Token Alignment training and unfreeze all parameters to facilitate comprehensive parameter updates and further improve document understanding.

### OCRBench Results

<div align="center">
<img width="1300" alt="image" src="https://cdn-uploads.huggingface.co/production/uploads/650d4a36cbd0c7d550d3b41b/DZej5Ogpho3wpZC4KVAMO.png">
</div>

### Document Understanding Results

<div align="center">
<img width="1300" alt="image" src="https://cdn-uploads.huggingface.co/production/uploads/650d4a36cbd0c7d550d3b41b/Msfs1YkDQHq2-djhm6QqD.png">
</div>

## License

This project is released under the MIT License.

## Citation

If you find this project useful in your research, please consider citing:

```BibTeX

```