---
language:
- en
task_categories:
- text-retrieval
task_ids:
- document-retrieval
config_names:
- corpus
tags:
- text-retrieval
dataset_info:
- config_name: qrels
features:
- name: query-id
dtype: string
- name: corpus-id
dtype: string
- name: type
dtype: string
splits:
- name: passage
num_bytes: 598330
num_examples: 7150
- name: document
num_bytes: 485624
num_examples: 6050
- config_name: queries
features:
- name: _id
dtype: string
- name: text
dtype: string
splits:
- name: test
num_bytes: 4781
num_examples: 76
- config_name: corpus
features:
- name: _id
dtype: string
- name: title
dtype: string
- name: headings
dtype: string
- name: text
dtype: string
splits:
- name: pass_core
num_bytes: 2712089
num_examples: 7126
- name: pass_10k
num_bytes: 1065541
num_examples: 2874
- name: pass_100k
num_bytes: 33382351
num_examples: 90000
- name: pass_1M
num_bytes: 333466010
num_examples: 900000
- name: pass_10M
num_bytes: 3332841963
num_examples: 9000000
- name: pass_100M
num_bytes: 33331696935
num_examples: 90000000
- name: doc_core
num_bytes: 91711400
num_examples: 6032
- name: doc_10k
num_bytes: 38457420
num_examples: 3968
- name: doc_100k
num_bytes: 883536440
num_examples: 90000
- name: doc_1M
num_bytes: 8850694962
num_examples: 900000
- name: doc_10M
num_bytes: 88689338934
num_examples: 9000000
configs:
- config_name: qrels
data_files:
- split: passage
path: qrels/passage.jsonl
- split: document
path: qrels/document.jsonl
- config_name: queries
data_files:
- split: test
path: queries.jsonl
- config_name: corpus
data_files:
- split: pass_core
path: passage/corpus_core.jsonl
- split: pass_10k
path: passage/corpus_10000.jsonl
- split: pass_100k
path: passage/corpus_100000.jsonl
- split: pass_1M
path: passage/corpus_1000000.jsonl
- split: pass_10M
path: passage/corpus_10000000_*.jsonl
- split: pass_100M
path: passage/corpus_100000000_*.jsonl
- split: doc_core
path: document/corpus_core.jsonl
- split: doc_10k
path: document/corpus_10000.jsonl
- split: doc_100k
path: document/corpus_100000.jsonl
- split: doc_1M
path: document/corpus_1000000.jsonl
- split: doc_10M
path: document/corpus_10000000_*.jsonl
---

# CoRE: Controlled Retrieval Evaluation Dataset
Motivation | Dataset Overview | Dataset Construction | Dataset Structure | Qrels Format | Evaluation | Citation | Links | Contact
CoRE (Controlled Retrieval Evaluation) is a benchmark dataset designed for the rigorous evaluation of embedding compression techniques in information retrieval.
## Motivation
Embedding compression is essential for scaling modern retrieval systems, but its effects are often evaluated under overly simplistic conditions. CoRE addresses this by offering a collection of corpora with:
- Multiple document lengths (passage and document) and sizes (10k to 100M)
- Fixed number of relevant and distractor documents per query
- Realistic evaluation grounded in TREC DL human relevance labels
This evaluation framework goes beyond, for example, the benchmark used in "The Curse of Dense Low-Dimensional Information Retrieval for Large Index Sizes" by Reimers and Gurevych (2021), which ignores differences in document length and relies on simple random sampling, resulting in a less realistic experimental setup.
## Dataset Overview
CoRE builds on MS MARCO v2 and introduces high-quality distractors mined from pooled system runs of the TREC 2023 Deep Learning Track. This keeps query difficulty consistent across corpus sizes and document types, overcoming a key limitation of randomly sampled corpora, where smaller samples contain few or no distractors and the retrieval task becomes trivial.
| Document Type | # Queries | Corpus Sizes |
|---|---|---|
| Passage | 65 | 10k, 100k, 1M, 10M, 100M |
| Document | 55 | 10k, 100k, 1M, 10M |
For each query:
- 10 relevant documents
- 100 high-quality distractors, selected via Reciprocal Rank Fusion (RRF) from top TREC system runs (bottom 20% of runs excluded)
## Dataset Construction
To avoid trivializing the retrieval task when reducing corpus size, CoRE follows the intelligent corpus subsampling strategy proposed by Fröbe et al. (2025). This method is used to mine distractors from pooled ranking lists. These distractors are then included in all corpora of CoRE, ensuring a fixed query difficulty, unlike naive random sampling, where the number of distractors would decrease with corpus size.
Steps for both passage and document retrieval:

1. Start from MS MARCO v2 annotations.
2. For each query:
   - Retain 10 relevant documents
   - Mine 100 distractors from RRF-fused rankings of top TREC 2023 DL submissions (see the sketch below)
3. Construct multiple corpus scales by aggregating the relevant documents and distractors with randomly sampled filler documents.
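For illustration, a minimal sketch of Reciprocal Rank Fusion as used in the distractor-mining step; the function signature and the conventional smoothing constant `k=60` are assumptions, not part of the CoRE tooling:

```python
from collections import defaultdict

def rrf(rankings, k=60):
    """Fuse ranked lists of document ids via Reciprocal Rank Fusion."""
    scores = defaultdict(float)
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    # Highest fused score first
    return sorted(scores, key=scores.get, reverse=True)

# Toy example: fuse three system runs; the top-ranked non-relevant
# results would become distractor candidates.
runs = [["d1", "d2", "d3"], ["d2", "d4", "d1"], ["d3", "d2", "d1"]]
print(rrf(runs))
```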
## Dataset Structure
The dataset consists of three subsets: `queries`, `qrels`, and `corpus`.

- `queries`: contains a single split (`test`)
- `qrels`: contains two splits (`passage` and `document`)
- `corpus`: contains 11 splits, detailed below:
**Passage splits**

| Split | # Documents |
|---|---|
| pass_core | 7,126 |
| pass_10k | 2,874 |
| pass_100k | 90,000 |
| pass_1M | 900,000 |
| pass_10M | 9,000,000 |
| pass_100M | 90,000,000 |

**Document splits**

| Split | # Documents |
|---|---|
| doc_core | 6,032 |
| doc_10k | 3,968 |
| doc_100k | 90,000 |
| doc_1M | 900,000 |
| doc_10M | 9,000,000 |
Note: The `_core` splits contain only relevant and distractor documents. The remaining splits hold the randomly sampled filler documents needed to reach each target size, so a corpus at a given scale is the union of `_core` and all smaller splits (e.g., `pass_core` + `pass_10k` = 10,000 passages; adding `pass_100k` yields 100,000).
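For example, a sketch of assembling the full 100k-scale passage corpus under this incremental-split reading (`<anonymized>/CoRE` is the placeholder repo id used throughout this card):

```python
from datasets import load_dataset, concatenate_datasets

# Stack the incremental splits to obtain the full 100k-scale passage corpus
parts = [
    load_dataset("<anonymized>/CoRE", name="corpus", split=s)
    for s in ("pass_core", "pass_10k", "pass_100k")
]
corpus_100k = concatenate_datasets(parts)  # ~100,000 passages in total
```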
## Qrels Format
The `qrels` files in CoRE differ from those of typical IR datasets. Instead of the standard relevance grades (e.g., 0, 1, 2), CoRE uses two distinct labels:

- `relevant` (10 documents per query)
- `distractor` (100 documents per query)
This enables focused evaluation of model sensitivity to compression under tightly controlled relevance and distractor distributions.
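If a downstream evaluation tool expects numeric grades, the two labels can be mapped to binary relevance; a minimal sketch (the mapping itself is an assumption, not prescribed by the dataset):

```python
from datasets import load_dataset

qrels = load_dataset("<anonymized>/CoRE", name="qrels", split="passage")

# Treat only "relevant" rows as positives; keep "distractor" rows at grade 0
# so they remain available for distractor-focused analyses.
binary_qrels = {}
for row in qrels:
    grade = 1 if row["type"] == "relevant" else 0
    binary_qrels.setdefault(row["query-id"], {})[row["corpus-id"]] = grade
```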
## Evaluation
```python
from datasets import load_dataset

# Load queries
queries = load_dataset("<anonymized>/CoRE", name="queries", split="test")

# Load relevance judgments
qrels = load_dataset("<anonymized>/CoRE", name="qrels", split="passage")

# Load a 100k-scale corpus split for passage retrieval
corpus = load_dataset("<anonymized>/CoRE", name="corpus", split="pass_100k")
```
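Continuing from the snippet above, a minimal end-to-end sketch that encodes queries and passages and reports Recall@100 over the `relevant` labels; the encoder choice is an illustrative assumption, and any dense or sparse retriever can be substituted:

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Encoder choice is illustrative; CoRE does not prescribe a model.
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
q_emb = model.encode(queries["text"], normalize_embeddings=True)
d_emb = model.encode(corpus["text"], normalize_embeddings=True)

# Collect the 10 relevant passages per query from the qrels.
relevant = {}
for row in qrels:
    if row["type"] == "relevant":
        relevant.setdefault(row["query-id"], set()).add(row["corpus-id"])

# Exact nearest-neighbor search by cosine similarity, then Recall@100.
scores = q_emb @ d_emb.T
top100 = np.argsort(-scores, axis=1)[:, :100]
doc_ids = corpus["_id"]

recalls = []
for qi, q_id in enumerate(queries["_id"]):
    rel = relevant.get(q_id, set())
    if rel:  # skip queries without passage qrels
        retrieved = {doc_ids[di] for di in top100[qi]}
        recalls.append(len(retrieved & rel) / len(rel))

print("Recall@100:", sum(recalls) / len(recalls))
```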
## Citation
If you use CoRE in your research, please cite:
<anonymized>
## Links
## Contact
For questions or collaboration opportunities, contact <anonymized> at <anonymized>.