<div align="center">
<img src='https://cdn-uploads.huggingface.co/production/uploads/647773a1168cb428e00e9a8f/N8lP93rB6lL3iqzML4SKZ.png' width=100px>

<h1 align="center"><b>On Path to Multimodal Generalist: Levels and Benchmarks</b></h1>
<p align="center">
<a href="https://generalist.top/">[📖 Project]</a>
<a href="https://level.generalist.top">[🏆 Leaderboard]</a>
<a href="https://xxxxx">[📄 Paper]</a>
<a href="https://huggingface.co/General-Level">[🤗 Dataset-HF]</a>
<a href="https://github.com/path2generalist/GeneralBench">[📁 Dataset-Github]</a>
</p>
</div>

---

We divide our benchmark into two settings: **`open`** and **`closed`**.

This is the **`open benchmark`** of Generalist-Bench, where we release the full ground-truth annotations for all datasets. It allows researchers to train and evaluate their models with access to the answers.

If you wish to thoroughly evaluate your model's performance, please use the
[👉 closed benchmark](https://huggingface.co/datasets/General-Level/General-Bench-Closeset), which comes with detailed usage instructions.

Final results will be updated on the [🏆 Leaderboard](https://level.generalist.top).

<!-- This is the **`closed benchmark`** of Generalist-Bench, where we release only the question annotations, **without ground-truth answers**, for all datasets.

You can follow the detailed [usage](#-usage) instructions to submit the results generated by your own model.

Final results will be updated on the [🏆 Leaderboard](https://level.generalist.top).

If you'd like to train or evaluate your model with access to the full answers, please check out the [👉 open benchmark](https://huggingface.co/datasets/General-Level/General-Bench-Openset), where all ground-truth annotations are provided. -->

---

## 📕 Table of Contents

- [✨ File Organization Structure](#filestructure)
- [🍟 Usage](#usage)
- [🌐 General-Bench](#bench)
- [🍕 Capabilities and Domains Distribution](#distribution)
- [🖼️ Image Task Taxonomy](#imagetaxonomy)
- [📽️ Video Task Taxonomy](#videotaxonomy)
- [📞 Audio Task Taxonomy](#audiotaxonomy)
- [💎 3D Task Taxonomy](#3dtaxonomy)
- [📚 Language Task Taxonomy](#languagetaxonomy)

---

<span id='filestructure'/>

# ✨✨✨ **File Organization Structure**

Here is the organization structure of the file system:

```
General-Bench
├── Image
│   ├── comprehension
│   │   ├── Bird-Detection
│   │   │   ├── annotation.json
│   │   │   └── images
│   │   │       └── Acadian_Flycatcher_0070_29150.jpg
│   │   ├── Bottle-Anomaly-Detection
│   │   │   ├── annotation.json
│   │   │   └── images
│   │   └── ...
│   └── generation
│       └── Layout-to-Face-Image-Generation
│           ├── annotation.json
│           └── images
│               └── ...
├── Video
│   ├── comprehension
│   │   └── Human-Object-Interaction-Video-Captioning
│   │       ├── annotation.json
│   │       └── videos
│   │           └── ...
│   └── generation
│       └── Scene-Image-to-Video-Generation
│           ├── annotation.json
│           └── videos
│               └── ...
├── 3d
│   ├── comprehension
│   │   └── 3D-Furniture-Classification
│   │       ├── annotation.json
│   │       └── pointclouds
│   │           └── ...
│   └── generation
│       └── Text-to-3D-Living-and-Arts-Point-Cloud-Generation
│           ├── annotation.json
│           └── pointclouds
│               └── ...
├── Audio
│   ├── comprehension
│   │   └── Accent-Classification
│   │       ├── annotation.json
│   │       └── audios
│   │           └── ...
│   └── generation
│       └── Video-To-Audio
│           ├── annotation.json
│           └── audios
│               └── ...
└── NLP
    ├── History-Question-Answering
    │   └── annotation.json
    ├── Abstractive-Summarization
    │   └── annotation.json
    └── ...
```
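Given this layout, enumerating all tasks is a short two-level directory walk. A minimal sketch, assuming the `modality/setting/task` hierarchy shown above (with NLP tasks sitting directly under `NLP/`, without the comprehension/generation split):

```python
from pathlib import Path

def list_tasks(root):
    """Enumerate (modality, setting, task) triples under a General-Bench root.

    NLP tasks have no comprehension/generation level, so their setting is None.
    """
    tasks = []
    for modality_dir in sorted(Path(root).iterdir()):
        if not modality_dir.is_dir():
            continue
        if modality_dir.name == "NLP":
            # Task folders sit directly under NLP/.
            tasks += [("NLP", None, t.name)
                      for t in sorted(modality_dir.iterdir()) if t.is_dir()]
        else:
            # Other modalities split into comprehension/ and generation/.
            for setting_dir in sorted(modality_dir.iterdir()):
                if setting_dir.is_dir():
                    tasks += [(modality_dir.name, setting_dir.name, t.name)
                              for t in sorted(setting_dir.iterdir()) if t.is_dir()]
    return tasks
```

For example, `list_tasks("General-Bench")` would yield triples like `("Image", "comprehension", "Bird-Detection")`.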

An illustrative example of file formats:

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64c139d867eff857ea51caa8/RD3b7Jwu0dftVq-4KbpFr.png)

<span id='usage'/>

## 🍟🍟🍟 Usage

Please download all the files in this repository. We also provide `overview.json`, which illustrates the format of our dataset.

xxxx
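Each task ships a single `annotation.json`, whose exact schema is task-specific (see the format illustration above). As a minimal sketch, parsing one task's annotations is a plain JSON load; interpreting the fields is up to the task:

```python
import json
from pathlib import Path

def load_annotation(task_dir):
    """Parse a task's annotation.json.

    The schema differs across tasks, so this returns whatever
    JSON object the task provides (typically a dict or a list).
    """
    with open(Path(task_dir) / "annotation.json", encoding="utf-8") as f:
        return json.load(f)
```

To fetch the whole repository programmatically, `huggingface_hub.snapshot_download(repo_id="General-Level/General-Bench-Openset", repo_type="dataset")` is the usual route.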

---

<span id='bench'/>

# 🌐🌐🌐 **General-Bench**

A companion massive multimodal benchmark dataset that encompasses a broader spectrum of skills, modalities, formats, and capabilities, including over **`700`** tasks and **`325K`** instances.

<div align="center">
<img src='https://cdn-uploads.huggingface.co/production/uploads/647773a1168cb428e00e9a8f/d4TIWw3rlWuxpBCEpHYJB.jpeg'>
<p>Overview of General-Bench, which covers 145 skills for more than 700 tasks with over 325,800 samples under comprehension and generation categories in various modalities.</p>
</div>

<span id='distribution'/>

## 🍕🍕🍕 Capabilities and Domains Distribution

<div align="center">
<img src='https://cdn-uploads.huggingface.co/production/uploads/64c139d867eff857ea51caa8/fF3iH95B3QEBvJYwqzZVG.png'>
<p>Distribution of various capabilities evaluated in General-Bench.</p>
</div>

<div align="center">
<img src='https://cdn-uploads.huggingface.co/production/uploads/64c139d867eff857ea51caa8/wQvllVeK-KC3Edp8Zjh-V.png'>
<p>Distribution of various domains and disciplines covered by General-Bench.</p>
</div>

<span id='imagetaxonomy'/>

# 🖼️ Image Task Taxonomy

<div align="center">
<img src='https://cdn-uploads.huggingface.co/production/uploads/64c139d867eff857ea51caa8/2QYihQRhZ5C9K5IbukY7R.png'>
<p>Taxonomy and hierarchy of data in terms of the Image modality.</p>
</div>

<span id='videotaxonomy'/>

# 📽️ Video Task Taxonomy

<div align="center">
<img src='https://cdn-uploads.huggingface.co/production/uploads/64c139d867eff857ea51caa8/A7PwfW5gXzstkDH49yIG5.png'>
<p>Taxonomy and hierarchy of data in terms of the Video modality.</p>
</div>

<span id='audiotaxonomy'/>

# 📞 Audio Task Taxonomy

<div align="center">
<img src='https://cdn-uploads.huggingface.co/production/uploads/64c139d867eff857ea51caa8/e-QBvBjeZy8vmcBjAB0PE.png'>
<p>Taxonomy and hierarchy of data in terms of the Audio modality.</p>
</div>

<span id='3dtaxonomy'/>

# 💎 3D Task Taxonomy

<div align="center">
<img src='https://cdn-uploads.huggingface.co/production/uploads/64c139d867eff857ea51caa8/EBXb-wyve14ExoLCgrpDK.png'>
<p>Taxonomy and hierarchy of data in terms of the 3D modality.</p>
</div>

<span id='languagetaxonomy'/>

# 📚 Language Task Taxonomy

<div align="center">
<img src='https://cdn-uploads.huggingface.co/production/uploads/64c139d867eff857ea51caa8/FLfk3QGdYb2sgorKTj_LT.png'>
<p>Taxonomy and hierarchy of data in terms of the Language modality.</p>
</div>

---

# 🚩 **Citation**

If you find our benchmark useful in your research, please kindly consider citing us:

```
@article{generalist2025,
  title={On Path to Multimodal Generalist: Levels and Benchmarks},
  author={Hao Fei and Yuan Zhou and Juncheng Li and Xiangtai Li and Qingshan Xu and Bobo Li and Shengqiong Wu and Yaoting Wang and Junbao Zhou and Jiahao Meng and Qingyu Shi and Zhiyuan Zhou and Liangtao Shi and Minghe Gao and Daoan Zhang and Zhiqi Ge and Siliang Tang and Kaihang Pan and Yaobo Ye and Haobo Yuan and Tao Zhang and Weiming Wu and Tianjie Ju and Zixiang Meng and Shilin Xu and Liyu Jia and Wentao Hu and Meng Luo and Jiebo Luo and Tat-Seng Chua and Hanwang Zhang and Shuicheng Yan},
  journal={arXiv},
  year={2025}
}
```