---
dataset_info:
  features:
  - name: id
    dtype: int32
  - name: title
    dtype: string
  - name: text
    dtype: string
  - name: url
    dtype: string
  - name: wiki_id
    dtype: int32
  - name: paragraph_id
    dtype: int32
  - name: images
    sequence:
    - name: caption
      dtype: string
    - name: image
      dtype: image
    - name: type
      dtype: string
    - name: url
      dtype: string
  splits:
  - name: train
    num_bytes: 58037298060.96
    num_examples: 42482460
  download_size: 47531941595
  dataset_size: 58037298060.96
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: cc-by-sa-4.0
task_categories:
- text-generation
- visual-document-retrieval
- sentence-similarity
- visual-question-answering
language:
- en
tags:
- retrieval-augmented-generation
- RAG
- multimodal
- vision-language
pretty_name: WikiFragments
size_categories:
- 10M<n<100M
---
WikiFragments
WikiFragments is a multimodal dataset built from the English Wikipedia, consisting of cleaned textual paragraphs paired with related images (infobox images and thumbnails) from the same page. Each paragraph, together with its associated images, forms a multimodal fragment: an atomic knowledge unit well suited to information retrieval and multimodal research.
- Fragment with four images and captions
- Fragment with only text and no associated images
The images above were generated from two separate rows in the dataset using the FragmentCreator, which converts them into stand-alone images. You can use the same code to reproduce this representation. In our paper, we used this representation to encode fragments with ColQwen2 for multimodal retrieval.
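To take a quick look at raw fragments before rendering them, the dataset can be streamed with the `datasets` library. A minimal sketch follows; the repository id is a placeholder, so substitute the actual Hub id of this dataset:

```python
from datasets import load_dataset

# Placeholder repository id; replace with the actual Hub id of WikiFragments.
ds = load_dataset("<user>/WikiFragments", split="train", streaming=True)

# Streaming avoids downloading the full ~47 GB before inspecting a few rows.
for fragment in ds.take(3):
    print(fragment["title"], "-", fragment["text"][:80])
    print("  images attached:", len(fragment["images"]["caption"]))
```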
Dataset Details
To construct this dataset, we modified the wikiextractor tool to extract and clean paragraphs from every page in the English Wikipedia. We preserved hyperlinks in the text and, when available, retrieved images from infoboxes and thumbnails. Each image is associated with its respective paragraph according to the order in which it appears in the HTML source of the page, along with its original caption.
Images are retrieved at the lower resolution used for webpage rendering,
extracted from the Kiwix full Wikipedia dump (ZIM file, January 2024).
This approach reduces the overall dataset size.
We define a multimodal fragment as follows:
A multimodal fragment is an atomic unit of information consisting of a paragraph from a Wikipedia page and all images that, in the page’s source code, appear above that paragraph.
Dataset Description
Paragraphs are cleaned using the standard wikiextractor logic. For each paragraph, we store:
- The paragraph text
- The corresponding Wikipedia page name and URL
- The list of associated images as PIL objects
- The image URLs and captions
- The sequential index of the paragraph within the page
- Curated by: Nicola Fanelli (PhD Student @ University of Bari Aldo Moro, Italy)
- Language(s) (NLP): English
License
- Code: MIT License.
- Text Data: The Wikipedia text is licensed under CC BY-SA 4.0. When using this dataset, you must provide proper attribution to Wikipedia and its contributors and share any derivatives under the same license.
- Images: Images are sourced from Wikipedia and Wikimedia Commons. Each image is subject to its own license, which is typically indicated on its original page. Users of this dataset are responsible for ensuring they comply with the licensing terms of individual images.
Dataset Sources
All content originates from Wikipedia (en). Any use of this dataset must comply with Wikipedia’s copyright policies.
- Repository: wikiextractor fork
- Paper: ArtSeek: Deep artwork understanding via multimodal in-context reasoning and late interaction retrieval
Uses
This dataset is designed for use in retrieval tasks, particularly in retrieval-augmented generation (RAG), to provide relevant multimodal context for answering questions.
In our paper, we generate visual representations of each multimodal fragment—images resembling a rendered PDF with the paragraph at the bottom, images at the top, and captions aligned to the right. These are then encoded using multi-vector multimodal representations with ColPali.
The code for generating these multimodal fragment images (such as the ones provided in the examples above) is available in the official repository of our paper.
Since ColPali only supports text queries, and our goal was to enable multimodal (image + text) queries, we also propose a novel technique in our paper to extend the model’s capabilities to handle multimodal queries without additional training.
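As a rough sketch of how rendered fragment images can be indexed and scored against queries with a ColQwen2/ColPali-style late-interaction retriever, the open colpali-engine package can be used along the lines below. The checkpoint name, file paths, and query are illustrative, and this is not the exact pipeline from the paper:

```python
import torch
from PIL import Image
from colpali_engine.models import ColQwen2, ColQwen2Processor

# Illustrative checkpoint; any ColQwen2 retrieval checkpoint should behave similarly.
model = ColQwen2.from_pretrained(
    "vidore/colqwen2-v1.0", torch_dtype=torch.bfloat16, device_map="cuda:0"
).eval()
processor = ColQwen2Processor.from_pretrained("vidore/colqwen2-v1.0")

# Fragment images rendered beforehand (e.g., with FragmentCreator); paths are placeholders.
fragment_images = [Image.open("fragment_0.png"), Image.open("fragment_1.png")]
queries = ["Which museum holds the painting described in this paragraph?"]

batch_images = processor.process_images(fragment_images).to(model.device)
batch_queries = processor.process_queries(queries).to(model.device)

with torch.no_grad():
    image_embeddings = model(**batch_images)   # one multi-vector embedding per fragment image
    query_embeddings = model(**batch_queries)  # one multi-vector embedding per query

# Late-interaction (MaxSim) scores with shape (num_queries, num_fragments).
scores = processor.score_multi_vector(query_embeddings, image_embeddings)
print(scores)
```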
Direct Use
This dataset is suitable for research and development in multimodal retrieval, especially in retrieval-augmented generation (RAG) systems. It can be used to evaluate methods that require paired image-text information or test architectures for multimodal representation learning. The dataset supports tasks such as:
- Multimodal dense retrieval
- Multimodal pretraining and evaluation
- Document understanding (e.g., question answering over richly formatted content)
- Benchmarking multimodal in-context learning approaches
Out-of-Scope Use
The dataset is not suitable for:
- Real-time systems requiring up-to-date information, as it is based on a static Wikipedia snapshot
- Legal, medical, or financial applications where factual accuracy and source traceability are critical
- Training or evaluating systems that treat the dataset as if it contains original or copyright-cleared media; users must respect the licensing of individual images
- Commercial use of the data without verifying licenses and complying with Wikipedia and Wikimedia Commons terms
Dataset Structure
Each data point is a multimodal fragment containing:
- `id`: Sequential paragraph identifier.
- `title`: Title of the corresponding Wikipedia page.
- `text`: Cleaned paragraph text with embedded links. Links and URLs are preserved for potential future use.
- `url`: URL of the corresponding Wikipedia page.
- `wiki_id`: Unique identifier of the Wikipedia page.
- `paragraph_id`: Sequential paragraph identifier within the corresponding page.
- `images`: A dictionary containing:
  - `caption`: List of captions for each associated image.
  - `image`: List of PIL image objects linked to the paragraph.
  - `type`: List of strings (`"infobox"` or `"thumb"`) indicating the image type.
  - `url`: List of internal URLs for accessing the images via the Kiwix dump.
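A sketch of how these fields look when reading the dataset with the `datasets` library (the repository id is a placeholder). Note that `images` is stored as parallel lists, so per-image records are recovered by zipping them:

```python
from datasets import load_dataset

# Placeholder repository id; replace with the actual Hub id of WikiFragments.
ds = load_dataset("<user>/WikiFragments", split="train", streaming=True)

# Grab the first fragment that has at least one associated image.
fragment = next(ex for ex in ds if len(ex["images"]["image"]) > 0)

print(fragment["id"], fragment["title"], fragment["url"], fragment["paragraph_id"])
for caption, image, img_type, img_url in zip(
    fragment["images"]["caption"],
    fragment["images"]["image"],   # PIL.Image objects
    fragment["images"]["type"],    # "infobox" or "thumb"
    fragment["images"]["url"],
):
    print(img_type, image.size, img_url, caption[:60])
```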
Currently, there are no predefined train/validation/test splits.
Users may create custom splits based on page domains, topics, or other criteria.
For example, in the ArtSeek code, we automatically navigate Wikipedia navboxes up to 5 levels deep to select only pages containing fragments related to the visual arts domain. You can apply the same approach to your domain of interest using the `select_category_pages` function available in the ArtSeek repository.
Example: `select_category_pages("Category:Visual arts", 5)`
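Below is a sketch of one possible custom split that keeps all paragraphs of a page on the same side. The repository id is again a placeholder, and the 1% held-out fraction is arbitrary:

```python
from datasets import load_dataset

# Placeholder repository id; note this downloads the full dataset (non-streaming).
ds = load_dataset("<user>/WikiFragments", split="train")

# Split at the page level so fragments from the same article never appear
# in both training and evaluation sets.
page_ids = ds.unique("wiki_id")
held_out = set(page_ids[: len(page_ids) // 100])  # ~1% of pages for evaluation

eval_ds = ds.filter(lambda ex: ex["wiki_id"] in held_out)
train_ds = ds.filter(lambda ex: ex["wiki_id"] not in held_out)
```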
Dataset Statistics
- Number of paragraphs: 42,482,460
- Number of paragraphs with at least one associated image: 2,254,123
- Total number of images: 2,499,977
- Average number of images per image-bearing paragraph: 1.109
- Maximum number of images in a single paragraph: 125
Most image-associated paragraphs contain only a single image. The number of paragraphs decreases as the number of associated images increases, roughly in inverse proportion.
Dataset Creation
Curation Rationale
The dataset was created to provide a high-quality multimodal benchmark composed of Wikipedia's rich textual and visual information. It serves as a research resource for advancing multimodal retrieval and generative models by offering paragraph-image pairs grounded in encyclopedic knowledge.
Source Data
All text and image content is sourced from the English Wikipedia and Wikimedia Commons via the Kiwix ZIM dump.
Data Collection and Processing
- Text was extracted using a modified version of wikiextractor, keeping internal links and paragraph ordering.
- Images were parsed from HTML infoboxes and thumbnail references, then downloaded using the Kiwix offline Wikipedia dump.
- Images were linked to the paragraph below them in the HTML structure (see the sketch below).
- Captions were extracted from the HTML metadata.
- The final dataset was assembled by matching paragraphs and their corresponding images.
Paragraphs are sourced from the Wikipedia dump dated 2025-08-01.
Images are sourced from the full Kiwix dump (ZIM file, January 2024).
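The "image is attached to the paragraph below it" rule from the list above can be illustrated with a short sketch on generic HTML. This is not the actual extraction code (which is a modified wikiextractor combined with Kiwix parsing); it only demonstrates the association logic:

```python
from bs4 import BeautifulSoup

def fragments_from_html(html: str):
    """Attach every image seen in document order to the first paragraph
    that follows it (illustrative only, not the real pipeline)."""
    soup = BeautifulSoup(html, "html.parser")
    pending_images = []  # images seen since the last emitted paragraph
    fragments = []

    for node in soup.find_all(["img", "p"]):
        if node.name == "img":
            pending_images.append({
                "url": node.get("src"),
                "caption": node.get("alt", ""),  # simplification: real captions come from HTML metadata
            })
        else:  # a paragraph closes the current fragment
            text = node.get_text(" ", strip=True)
            if text:
                fragments.append({"text": text, "images": pending_images})
                pending_images = []
    return fragments
```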
Who are the source data producers?
The text was authored by contributors to the English Wikipedia. Images were contributed by various users to Wikimedia Commons and are subject to individual licenses. No demographic or identity metadata is available for content creators.
Annotations
There are no manual annotations beyond the original captions associated with images from Wikipedia pages.
Annotation process
N/A
Who are the annotators?
N/A
Personal and Sensitive Information
To the best of our knowledge, the dataset does not contain personal or sensitive information. Wikipedia is a public knowledge source with moderation and community standards aimed at excluding personal data. However, users are advised to verify content if used in sensitive contexts.
Bias, Risks, and Limitations
As the dataset is derived from Wikipedia, it inherits potential biases found in Wikipedia articles, including:
- Coverage bias (overrepresentation of certain regions, topics, or demographics)
- Editorial bias (reflecting the views of more active editor groups)
- Visual bias (images may be selected or framed subjectively)
Additionally:
- Not all Wikipedia pages contain relevant or aligned images
- Image licenses may vary and require individual attribution or restrictions
Recommendations
Users should be aware of and account for:
- The need to verify and respect licensing terms of individual images
- The inherited biases from Wikipedia contributors and editorial processes
- The fact that the dataset reflects a snapshot in time and is not updated in real-time
- Limitations in using this dataset for safety-critical or fact-sensitive applications
Citation
BibTeX:
@article{fanelli2025artseek,
title={ArtSeek: Deep artwork understanding via multimodal in-context reasoning and late interaction retrieval},
author={Fanelli, Nicola and Vessio, Gennaro and Castellano, Giovanna},
journal={arXiv preprint arXiv:2507.21917},
year={2025}
}
APA:
Fanelli, N., Vessio, G., & Castellano, G. (2025). ArtSeek: Deep artwork understanding via multimodal in-context reasoning and late interaction retrieval. arXiv preprint arXiv:2507.21917.
Dataset Card Authors
Nicola Fanelli
Dataset Card Contact
For questions, please contact: [email protected]