Modalities: Text
Formats: json
Languages: English
Libraries: Datasets, pandas
Dataset Viewer (auto-converted to Parquet)

The viewer previews the test qrels, which have three columns:

query-id: string, 54 distinct values
corpus-id: string, 3–7 characters
score: float64, graded relevance label from 0 to 3

Example rows (query-id, corpus-id, score):

23849, 1020327, 2
23849, 1034183, 3
23849, 1120730, 0
23849, 1139571, 1
23849, 1143724, 0

(Preview truncated; the viewer shows 100 rows, all for query 23849.)
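
As a quick way to inspect these judgments yourself, the sketch below loads a locally downloaded copy of the qrels with pandas and tabulates the graded labels per query. The file path is a placeholder, and the column names are taken from the schema above.

import pandas as pd

# Placeholder path -- point this at the downloaded qrels file (Parquet in this case).
qrels = pd.read_parquet("trec-dl-2020-qrels-test.parquet")

# How many passages received each graded label (0-3), per query.
grade_counts = (
    qrels.groupby(["query-id", "score"])
         .size()
         .unstack(fill_value=0)
)
print(grade_counts.head())

# Number of judged passages per query.
print(qrels["query-id"].value_counts().describe())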

TRECDL2020

An MTEB dataset
Massive Text Embedding Benchmark

TREC Deep Learning Track 2020 passage ranking task. The task involves retrieving relevant passages from the MS MARCO collection given web search queries. Queries have multi-graded relevance judgments.

Task category: t2t
Domains: Encyclopaedic, Academic, Blog, News, Medical, Government, Reviews, Non-fiction, Social, Web
Reference: https://microsoft.github.io/msmarco/TREC-Deep-Learning-2020
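
Because the relevance judgments are graded (0-3) rather than binary, retrieval quality on this task is typically reported as nDCG@10, which mteb computes for you when you run the task. Purely to illustrate how graded qrels enter that metric, here is a minimal sketch using pytrec_eval with a toy run: the passage ids and grades are copied from the preview rows above, the retrieval scores are made up, and pytrec_eval is not required for the mteb workflow below.

import pytrec_eval

# Graded qrels for query 23849, taken from the preview above: query -> {passage id: grade}.
qrels = {"23849": {"1034183": 3, "1020327": 2, "1139571": 1, "1120730": 0}}

# A hypothetical ranked run: query -> {passage id: retrieval score}.
run = {"23849": {"1034183": 14.2, "1020327": 13.7, "1120730": 11.9, "1139571": 10.4}}

evaluator = pytrec_eval.RelevanceEvaluator(qrels, {"ndcg_cut.10"})
print(evaluator.evaluate(run)["23849"]["ndcg_cut_10"])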

How to evaluate on this task

You can evaluate an embedding model on this dataset using the following code:

import mteb

# get_tasks returns a list of task objects; TRECDL2020 is a retrieval task.
tasks = mteb.get_tasks(["TRECDL2020"])
evaluator = mteb.MTEB(tasks)

# Replace the placeholder with the name of the embedding model you want to evaluate.
model = mteb.get_model("YOUR_MODEL_NAME")
evaluator.run(model)

To learn more about how to run models on mteb tasks, check out the GitHub repository.
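
As a concrete end-to-end example, the snippet below picks an arbitrary sentence-transformers model and writes the results to disk; the model name is only an example, and output_folder is optional.

import mteb

# Any embedding model known to mteb can be substituted here; this name is just an example.
model = mteb.get_model("sentence-transformers/all-MiniLM-L6-v2")

tasks = mteb.get_tasks(["TRECDL2020"])
evaluator = mteb.MTEB(tasks)

# Per-task result JSON files (including nDCG@10) are written under output_folder.
evaluator.run(model, output_folder="results/trec-dl-2020")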

Citation

If you use this dataset, please cite the dataset as well as mteb, as this dataset likely includes additional processing as part of the MMTEB contribution.


@article{DBLP:journals/corr/NguyenRSGTMD16,
  archiveprefix = {arXiv},
  author = {Tri Nguyen and Mir Rosenberg and Xia Song and Jianfeng Gao and Saurabh Tiwary and Rangan Majumder and Li Deng},
  bibsource = {dblp computer science bibliography, https://dblp.org},
  biburl = {https://dblp.org/rec/journals/corr/NguyenRSGTMD16.bib},
  eprint = {1611.09268},
  journal = {CoRR},
  timestamp = {Mon, 13 Aug 2018 16:49:03 +0200},
  title = {{MS} {MARCO:} {A} Human Generated MAchine Reading COmprehension Dataset},
  url = {http://arxiv.org/abs/1611.09268},
  volume = {abs/1611.09268},
  year = {2016},
}

@inproceedings{craswell2021overview,
  author = {Craswell, Nick and Mitra, Bhaskar and Yilmaz, Emine and Campos, Daniel and Voorhees, Ellen M},
  booktitle = {Proceedings of the 29th Text REtrieval Conference (TREC 2020)},
  organization = {NIST},
  title = {Overview of the TREC 2020 deep learning track},
  year = {2021},
}

@article{enevoldsen2025mmtebmassivemultilingualtext,
  title={MMTEB: Massive Multilingual Text Embedding Benchmark},
  author={Kenneth Enevoldsen and Isaac Chung and Imene Kerboua and Márton Kardos and Ashwin Mathur and David Stap and Jay Gala and Wissam Siblini and Dominik Krzemiński and Genta Indra Winata and Saba Sturua and Saiteja Utpala and Mathieu Ciancone and Marion Schaeffer and Gabriel Sequeira and Diganta Misra and Shreeya Dhakal and Jonathan Rystrøm and Roman Solomatin and Ömer Çağatan and Akash Kundu and Martin Bernstorff and Shitao Xiao and Akshita Sukhlecha and Bhavish Pahwa and Rafał Poświata and Kranthi Kiran GV and Shawon Ashraf and Daniel Auras and Björn Plüster and Jan Philipp Harries and Loïc Magne and Isabelle Mohr and Mariya Hendriksen and Dawei Zhu and Hippolyte Gisserot-Boukhlef and Tom Aarsen and Jan Kostkan and Konrad Wojtasik and Taemin Lee and Marek Šuppa and Crystina Zhang and Roberta Rocca and Mohammed Hamdy and Andrianos Michail and John Yang and Manuel Faysse and Aleksei Vatolin and Nandan Thakur and Manan Dey and Dipam Vasani and Pranjal Chitale and Simone Tedeschi and Nguyen Tai and Artem Snegirev and Michael Günther and Mengzhou Xia and Weijia Shi and Xing Han Lù and Jordan Clive and Gayatri Krishnakumar and Anna Maksimova and Silvan Wehrli and Maria Tikhonova and Henil Panchal and Aleksandr Abramov and Malte Ostendorff and Zheng Liu and Simon Clematide and Lester James Miranda and Alena Fenogenova and Guangyu Song and Ruqiya Bin Safi and Wen-Ding Li and Alessia Borghini and Federico Cassano and Hongjin Su and Jimmy Lin and Howard Yen and Lasse Hansen and Sara Hooker and Chenghao Xiao and Vaibhav Adlakha and Orion Weller and Siva Reddy and Niklas Muennighoff},
  publisher = {arXiv},
  journal={arXiv preprint arXiv:2502.13595},
  year={2025},
  url={https://arxiv.org/abs/2502.13595},
  doi = {10.48550/arXiv.2502.13595},
}

@article{muennighoff2022mteb,
  author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Lo{\"\i}c and Reimers, Nils},
  title = {MTEB: Massive Text Embedding Benchmark},
  publisher = {arXiv},
  journal = {arXiv preprint arXiv:2210.07316},
  year = {2022},
  url = {https://arxiv.org/abs/2210.07316},
  doi = {10.48550/ARXIV.2210.07316},
}

Dataset Statistics

The following are the descriptive statistics for this task (shown below as JSON). They can also be obtained using:

import mteb

task = mteb.get_task("TRECDL2020")

desc_stats = task.metadata.descriptive_stats
{
    "test": {
        "num_samples": 8841877,
        "number_of_characters": 2969060931,
        "documents_text_statistics": {
            "total_text_length": 2969059106,
            "min_text_length": 3,
            "average_text_length": 335.79716603691344,
            "max_text_length": 1669,
            "unique_texts": 8841661
        },
        "documents_image_statistics": null,
        "queries_text_statistics": {
            "total_text_length": 1825,
            "min_text_length": 12,
            "average_text_length": 33.7962962962963,
            "max_text_length": 70,
            "unique_texts": 54
        },
        "queries_image_statistics": null,
        "relevant_docs_statistics": {
            "num_relevant_docs": 3606,
            "min_relevant_docs_per_query": 152,
            "average_relevant_docs_per_query": 66.77777777777777,
            "max_relevant_docs_per_query": 368,
            "unique_relevant_docs": 11224
        },
        "top_ranked_statistics": null
    }
}
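
The statistics are returned as a plain nested dictionary, so individual figures can be pulled out directly. A small sketch, with the field names taken from the JSON above:

import mteb

task = mteb.get_task("TRECDL2020")
stats = task.metadata.descriptive_stats["test"]

# Average passage and query lengths in characters, matching the values reported above.
print(stats["documents_text_statistics"]["average_text_length"])
print(stats["queries_text_statistics"]["average_text_length"])

# Number of distinct queries with relevance judgments.
print(stats["queries_text_statistics"]["unique_texts"])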

This dataset card was automatically generated using MTEB
