V2PE-Data

[📂 GitHub] [🆕 Blog] [📜 Paper] [🤗 HF Models]


Summary

We introduce two augmented long-context multimodal datasets: Long Visual Question Answering (Long-VQA) and Long Multimodal Retrieval (Long-MR). These datasets aim to strengthen VLMs' long-context training and to establish a systematic evaluation framework, addressing long-context understanding challenges that go beyond the scope of existing training data.


  • Long Visual Question Answering (Long-VQA): The Long-VQA dataset evaluates VLMs' ability to understand and reason over long multimodal sequences in general visual question-answering tasks. We extended 17 widely adopted datasets (e.g., DocVQA, GQA, SQA), expanding their content from short sequences to sequences of up to 32K tokens. The tasks involve answering questions that require commonsense reasoning, factual knowledge, and interpretation of visual information from charts, documents, and real-world texts. Long-VQA contains 533K samples: 392K for training (up to 32K tokens) and 141K for validation (up to 64K tokens), the longer validation split being used to evaluate generalization to longer contexts.


  • Long Multimodal Retrieval (Long-MR): We developed Long-MR by inserting a target image or textual segment into sequences of interleaved images and texts. Long-MR evaluates VLMs' ability to retrieve specific targets from ultra-long multimodal sequences, requiring models to locate the inserted "needle" and answer the associated question. Following the data construction process of MM-NIAH, we generated two subsets: Long-MR-32K (488K samples, sequences up to 32K tokens) and Long-MR-256K (50K samples, sequences up to 256K tokens). To probe the limits of VLMs' long-context capabilities, we further extended the official MM-NIAH evaluation benchmark with testing samples whose sequence lengths range from 64K to 1M tokens, resulting in the MM-NIAH-1M benchmark; this pushes the testing capacity beyond the original MM-NIAH, which was limited to sequences of up to 64K tokens. A minimal sketch of the needle-insertion step is given below.
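
The core of the Long-MR construction is conceptually simple: hide a target ("needle") image at a chosen relative depth inside an interleaved image-text document and record where it was placed. The sketch below is only a minimal illustration of that idea, not the released pipeline; the function name, the depth convention, and the segment format are assumptions, and the actual MM-NIAH-style construction also handles text needles, answer choices, and multiple needles.

```python
import random
from typing import Any, Dict, List

def insert_needle(
    segments: List[str],   # interleaved text segments; "<image>" marks an image slot
    images: List[str],     # image file names aligned with the "<image>" slots
    needle_image: str,     # the target ("needle") image to hide in the sequence
    depth: float,          # relative position in [0, 1] at which to place the needle
) -> Dict[str, Any]:
    """Place a needle image at a relative depth inside an interleaved image-text sequence."""
    insert_at = int(depth * len(segments))
    segments = segments[:insert_at] + ["<image>"] + segments[insert_at:]
    # Keep the image list aligned with the order of "<image>" placeholders.
    image_slot = sum(seg == "<image>" for seg in segments[:insert_at])
    images = images[:image_slot] + [needle_image] + images[image_slot:]
    return {
        "images_list": images,
        "context": "\n".join(segments),
        "meta": {
            "placed_depth": [depth],
            "num_images": len(images),
            "needles": [needle_image],
        },
    }

# Hypothetical usage: hide one needle at a random depth in a three-segment document.
sample = insert_needle(
    segments=["<image>", "Some article text ...", "More text ..."],
    images=["page_01.png"],
    needle_image="needle.jpeg",
    depth=random.random(),
)
print(sample["meta"])
```

The meta fields written here (placed_depth, num_images, needles) mirror the ones recorded in the released samples.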


Please refer to our paper for more details.
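
For orientation, each released sample is an interleaved record with an images_list, a context string containing <image> placeholders, a question, an integer answer, a meta dict, and an integer id. The typed sketch below is only illustrative of that layout; field contents vary by subset.

```python
from typing import Any, Dict, List, TypedDict

class V2PESample(TypedDict):
    """Illustrative layout of one interleaved sample; not an official schema definition."""
    images_list: List[str]  # image file names, in the order their "<image>" slots appear
    context: str            # long interleaved text with "<image>" placeholders
    question: str           # question about the target content
    answer: int             # index of the correct answer choice
    meta: Dict[str, Any]    # e.g. placed_depth, context_length, num_images, needles, choices
    id: int                 # sample index within the split
```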

Evaluation Results of the Released Model

General MLLM Benchmarks

| Model | #Param | ChartQA | DocVQA | AI2D | InfoVQA | SQA | POPE | MMMU (val) | MMBench (EN) | SEED (I) | Avg |
|---|---|---|---|---|---|---|---|---|---|---|---|
| InternVL2-2B | 2.0B | 71.7 | 86.9 | 74.1 | 58.9 | 94.1 | 85.2 | 36.3 | 73.4 | 70.9 | 72.4 |
| DeepSeek-VL-1.3B | 2.0B | 47.4 | - | 51.5 | - | 68.4 | 85.9 | 33.8 | 66.4 | 66.0 | - |
| Qwen2-VL-2B | 2.0B | 73.5 | 90.1 | 74.7 | 65.5 | - | - | 41.1 | 74.9 | - | - |
| Aquila-VL-2B | 2.2B | 32.0 | 85.0 | 75.1 | 58.3 | 95.1 | 83.1 | 46.9 | 79.0 | 73.9 | 69.8 |
| MiniCPM-V-2 | 2.8B | 55.6 | 71.9 | 62.9 | - | 80.7 | 86.3 | 38.2 | 64.1 | 67.1 | - |
| Vintern-3B-beta | 3.7B | 68.3 | - | 69.1 | - | 75.0 | 87.4 | 46.7 | 70.6 | 70.0 | - |
| Llama 3.2 11B | 11B | 83.4 | 88.4 | 91.1 | - | - | - | 50.7 | 68.0 | - | - |
| Qwen2-VL-72B | 73B | 88.3 | 96.5 | 88.1 | 84.5 | 91.2 | 87.2 | 64.5 | 86.9 | 77.9 | 85.0 |
| GPT-4o | - | 85.7 | 92.8 | 84.7 | - | 90.1 | 97.2 | 69.1 | 82.1 | 76.7 | - |
| InternVL2-V2PE-32K | 2.0B | 76.4 | 83.9 | 73.2 | 55.9 | 94.9 | 88.8 | 36.6 | 73.5 | 71.2 | 72.5 |

Long-Context MLLM Benchmarks

| Model | #Param | MM-NIAH (Image) | MM-NIAH (Text) | MM-NIAH (Avg) | MileBench (T) | MileBench (S) | MileBench (NI) | MileBench (Avg) | Video-MME | MVBench |
|---|---|---|---|---|---|---|---|---|---|---|
| InternVL2-2B | 2.0B | 23.0 | 18.9 | 21.0 | 58.2 | 54.5 | 37.0 | 49.9 | - | - |
| Phi-3-Vision | 2.7B | - | - | - | 46.9 | 50.0 | - | - | - | - |
| OmChat | 3.9B | - | - | - | 51.4 | 52.0 | - | - | 45.9 | 50.2 |
| LongLLaVA | 9B | - | - | - | 47.3 | 46.8 | - | - | 43.7 | 49.1 |
| LongLLaVA | 13B | - | - | - | 52.7 | 52.1 | - | - | 51.6 | 54.6 |
| VILA | 13B | 14.5 | 40.5 | 27.5 | - | - | - | - | - | - |
| Gemini-1.5 | - | 28.5 | 82.1 | 55.2 | 50.2 | 58.3 | 97.9 | 68.8 | 69.6 | - |
| GPT-4V | - | - | 84.1 | - | 45.6 | 58.9 | 99.4 | 68.0 | 59.9 | 43.5 |
| GPT-4o | - | - | - | - | 56.2 | 63.5 | - | - | 64.7 | - |
| Claude3-Opus | - | - | - | - | 37.4 | 48.1 | 85.3 | 56.9 | 59.7 | - |
| InternVL2-V2PE-32K | 2.0B | 78.1 | 85.7 | 81.8 | 65.5 | 56.4 | 97.2 | 72.5 | 50.7 | 65.6 |

Usage

Please refer to our GitHub Repo.
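
For a quick look at individual samples, something along the following lines may work with the datasets library; the split name and whether streaming succeeds depend on the repository's file layout, so treat this as a sketch and follow the GitHub Repo for the supported loading path.

```python
from datasets import load_dataset

# Minimal sketch: stream samples without downloading the full dataset.
# "train" is an assumed split name; check the repository layout first.
ds = load_dataset("OpenGVLab/V2PE-Data", split="train", streaming=True)

for sample in ds:
    print(sample["question"])
    print("answer index:", sample["answer"], "| images:", len(sample["images_list"]))
    break
```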

Citation

If you find this work helpful in your research, please consider citing:

@misc{ge2024v2peimprovingmultimodallongcontext,
      title={V2PE: Improving Multimodal Long-Context Capability of Vision-Language Models with Variable Visual Position Encoding}, 
      author={Junqi Ge and Ziyi Chen and Jintao Lin and Jinguo Zhu and Xihui Liu and Jifeng Dai and Xizhou Zhu},
      year={2024},
      eprint={2412.09616},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2412.09616}, 
}