---
annotations_creators:
- derived
language:
- eng
license: other
multilinguality: monolingual
task_categories:
- text-retrieval
task_ids:
- Question answering
config_names:
- corpus
tags:
- mteb
- text
dataset_info:
- config_name: default
  features:
  - name: query-id
    dtype: string
  - name: corpus-id
    dtype: string
  - name: score
    dtype: float64
  splits:
  - name: train
    num_bytes: 15384091
    num_examples: 532751
  - name: dev
    num_bytes: 217670
    num_examples: 7437
  - name: test
    num_bytes: 270432
    num_examples: 9260
- config_name: corpus
  features:
  - name: _id
    dtype: string
  - name: title
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: corpus
    num_bytes: 3149969815
    num_examples: 8841823
- config_name: queries
  features:
  - name: _id
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: queries
    num_bytes: 24100662
    num_examples: 509962
configs:
- config_name: default
  data_files:
  - split: train
    path: qrels/train.jsonl
  - split: dev
    path: qrels/dev.jsonl
  - split: test
    path: qrels/test.jsonl
- config_name: corpus
  data_files:
  - split: corpus
    path: corpus.jsonl
- config_name: queries
  data_files:
  - split: queries
    path: queries.jsonl
---
<!-- adapted from https://github.com/huggingface/huggingface_hub/blob/v0.30.2/src/huggingface_hub/templates/datasetcard_template.md -->

<div align="center" style="padding: 40px 20px; background-color: white; border-radius: 12px; box-shadow: 0 2px 10px rgba(0, 0, 0, 0.05); max-width: 600px; margin: 0 auto;">
  <h1 style="font-size: 3.5rem; color: #1a1a1a; margin: 0 0 20px 0; letter-spacing: 2px; font-weight: 700;">MSMARCO</h1>
  <div style="font-size: 1.5rem; color: #4a4a4a; margin-bottom: 5px; font-weight: 300;">An <a href="https://github.com/embeddings-benchmark/mteb" style="color: #2c5282; font-weight: 600; text-decoration: none;" onmouseover="this.style.textDecoration='underline'" onmouseout="this.style.textDecoration='none'">MTEB</a> dataset</div>
  <div style="font-size: 0.9rem; color: #2c5282; margin-top: 10px;">Massive Text Embedding Benchmark</div>
</div>

MS MARCO is a collection of datasets focused on deep learning in search.

| | |
|---------------|---------------------------------------------|
| Task category | t2t |
| Domains | Encyclopaedic, Academic, Blog, News, Medical, Government, Reviews, Non-fiction, Social, Web |
| Reference | https://microsoft.github.io/msmarco/ |

## How to evaluate on this task

You can evaluate an embedding model on this dataset using the following code:

```python
import mteb

tasks = mteb.get_tasks(tasks=["MSMARCO"])
evaluator = mteb.MTEB(tasks=tasks)

model = mteb.get_model(YOUR_MODEL)  # replace YOUR_MODEL with a model name, e.g. a Sentence Transformers model id
evaluator.run(model)
```

<!-- Datasets want link to arxiv in readme to autolink dataset with paper -->
To learn more about how to run models on `mteb` tasks, check out the [GitHub repository](https://github.com/embeddings-benchmark/mteb).
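
If you want to inspect the underlying data rather than run an evaluation, the queries, corpus, and relevance judgements declared in the YAML header can also be loaded with the `datasets` library. The following is a minimal sketch: it assumes the repository id of this dataset is `mteb/msmarco`, while the config names and column names are taken from the metadata above.

```python
from datasets import load_dataset

# Repository id is assumed to be "mteb/msmarco"; the config names below come from the YAML header.
queries = load_dataset("mteb/msmarco", "queries", split="queries")  # columns: _id, text
qrels = load_dataset("mteb/msmarco", "default", split="dev")        # columns: query-id, corpus-id, score
# The "corpus" config (columns: _id, title, text) holds the ~8.8M passages and is loaded the same way.

# Resolve the first relevance judgement back to its query text.
query_text = {q["_id"]: q["text"] for q in queries}
first = qrels[0]
print(query_text[first["query-id"]], "->", first["corpus-id"], "score:", first["score"])
```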

## Citation

If you use this dataset, please cite the dataset as well as [mteb](https://github.com/embeddings-benchmark/mteb), as this dataset likely includes additional processing as part of the [MMTEB Contribution](https://github.com/embeddings-benchmark/mteb/tree/main/docs/mmteb).

```bibtex
@article{DBLP:journals/corr/NguyenRSGTMD16,
  archiveprefix = {arXiv},
  author = {Tri Nguyen and
            Mir Rosenberg and
            Xia Song and
            Jianfeng Gao and
            Saurabh Tiwary and
            Rangan Majumder and
            Li Deng},
  bibsource = {dblp computer science bibliography, https://dblp.org},
  biburl = {https://dblp.org/rec/journals/corr/NguyenRSGTMD16.bib},
  eprint = {1611.09268},
  journal = {CoRR},
  timestamp = {Mon, 13 Aug 2018 16:49:03 +0200},
  title = {{MS} {MARCO:} {A} Human Generated MAchine Reading COmprehension Dataset},
  url = {http://arxiv.org/abs/1611.09268},
  volume = {abs/1611.09268},
  year = {2016},
}

@article{enevoldsen2025mmtebmassivemultilingualtext,
  title = {MMTEB: Massive Multilingual Text Embedding Benchmark},
  author = {Kenneth Enevoldsen and Isaac Chung and Imene Kerboua and Márton Kardos and Ashwin Mathur and David Stap and Jay Gala and Wissam Siblini and Dominik Krzemiński and Genta Indra Winata and Saba Sturua and Saiteja Utpala and Mathieu Ciancone and Marion Schaeffer and Gabriel Sequeira and Diganta Misra and Shreeya Dhakal and Jonathan Rystrøm and Roman Solomatin and Ömer Çağatan and Akash Kundu and Martin Bernstorff and Shitao Xiao and Akshita Sukhlecha and Bhavish Pahwa and Rafał Poświata and Kranthi Kiran GV and Shawon Ashraf and Daniel Auras and Björn Plüster and Jan Philipp Harries and Loïc Magne and Isabelle Mohr and Mariya Hendriksen and Dawei Zhu and Hippolyte Gisserot-Boukhlef and Tom Aarsen and Jan Kostkan and Konrad Wojtasik and Taemin Lee and Marek Šuppa and Crystina Zhang and Roberta Rocca and Mohammed Hamdy and Andrianos Michail and John Yang and Manuel Faysse and Aleksei Vatolin and Nandan Thakur and Manan Dey and Dipam Vasani and Pranjal Chitale and Simone Tedeschi and Nguyen Tai and Artem Snegirev and Michael Günther and Mengzhou Xia and Weijia Shi and Xing Han Lù and Jordan Clive and Gayatri Krishnakumar and Anna Maksimova and Silvan Wehrli and Maria Tikhonova and Henil Panchal and Aleksandr Abramov and Malte Ostendorff and Zheng Liu and Simon Clematide and Lester James Miranda and Alena Fenogenova and Guangyu Song and Ruqiya Bin Safi and Wen-Ding Li and Alessia Borghini and Federico Cassano and Hongjin Su and Jimmy Lin and Howard Yen and Lasse Hansen and Sara Hooker and Chenghao Xiao and Vaibhav Adlakha and Orion Weller and Siva Reddy and Niklas Muennighoff},
  publisher = {arXiv},
  journal = {arXiv preprint arXiv:2502.13595},
  year = {2025},
  url = {https://arxiv.org/abs/2502.13595},
  doi = {10.48550/arXiv.2502.13595},
}

@article{muennighoff2022mteb,
  author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Lo{\"\i}c and Reimers, Nils},
  title = {MTEB: Massive Text Embedding Benchmark},
  publisher = {arXiv},
  journal = {arXiv preprint arXiv:2210.07316},
  year = {2022},
  url = {https://arxiv.org/abs/2210.07316},
  doi = {10.48550/ARXIV.2210.07316},
}
```

# Dataset Statistics

<details>
<summary>Dataset Statistics</summary>

The following are the descriptive statistics for this task. They can also be obtained using:

```python
import mteb

task = mteb.get_task("MSMARCO")

desc_stats = task.metadata.descriptive_stats
```

```json
{
    "train": {
        "num_samples": 9344762,
        "number_of_characters": 2994608051,
        "num_documents": 8841823,
        "min_document_length": 4,
        "average_document_length": 336.79716603691344,
        "max_document_length": 1670,
        "unique_documents": 8841823,
        "num_queries": 502939,
        "min_query_length": 5,
        "average_query_length": 33.21898281898998,
        "max_query_length": 215,
        "unique_queries": 502939,
        "none_queries": 0,
        "num_relevant_docs": 532751,
        "min_relevant_docs_per_query": 1,
        "average_relevant_docs_per_query": 1.0592755781516248,
        "max_relevant_docs_per_query": 7,
        "unique_relevant_docs": 516472,
        "num_instructions": null,
        "min_instruction_length": null,
        "average_instruction_length": null,
        "max_instruction_length": null,
        "unique_instructions": null,
        "num_top_ranked": null,
        "min_top_ranked_per_query": null,
        "average_top_ranked_per_query": null,
        "max_top_ranked_per_query": null
    },
    "dev": {
        "num_samples": 8848803,
        "number_of_characters": 2978133099,
        "num_documents": 8841823,
        "min_document_length": 4,
        "average_document_length": 336.79716603691344,
        "max_document_length": 1670,
        "unique_documents": 8841823,
        "num_queries": 6980,
        "min_query_length": 9,
        "average_query_length": 33.2621776504298,
        "max_query_length": 186,
        "unique_queries": 6980,
        "none_queries": 0,
        "num_relevant_docs": 7437,
        "min_relevant_docs_per_query": 1,
        "average_relevant_docs_per_query": 1.0654727793696275,
        "max_relevant_docs_per_query": 4,
        "unique_relevant_docs": 7433,
        "num_instructions": null,
        "min_instruction_length": null,
        "average_instruction_length": null,
        "max_instruction_length": null,
        "unique_instructions": null,
        "num_top_ranked": null,
        "min_top_ranked_per_query": null,
        "average_top_ranked_per_query": null,
        "max_top_ranked_per_query": null
    },
    "test": {
        "num_samples": 8841866,
        "number_of_characters": 2977902337,
        "num_documents": 8841823,
        "min_document_length": 4,
        "average_document_length": 336.79716603691344,
        "max_document_length": 1670,
        "unique_documents": 8841823,
        "num_queries": 43,
        "min_query_length": 16,
        "average_query_length": 32.74418604651163,
        "max_query_length": 55,
        "unique_queries": 43,
        "none_queries": 0,
        "num_relevant_docs": 9260,
        "min_relevant_docs_per_query": 132,
        "average_relevant_docs_per_query": 95.3953488372093,
        "max_relevant_docs_per_query": 582,
        "unique_relevant_docs": 9139,
        "num_instructions": null,
        "min_instruction_length": null,
        "average_instruction_length": null,
        "max_instruction_length": null,
        "unique_instructions": null,
        "num_top_ranked": null,
        "min_top_ranked_per_query": null,
        "average_top_ranked_per_query": null,
        "max_top_ranked_per_query": null
    }
}
```
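
For instance, assuming `desc_stats` mirrors the JSON above (a dictionary keyed by split name), individual figures can be read directly:

```python
import mteb

task = mteb.get_task("MSMARCO")
desc_stats = task.metadata.descriptive_stats

# Number of test queries and relevance judgements (43 and 9260 in the statistics above).
print(desc_stats["test"]["num_queries"])
print(desc_stats["test"]["num_relevant_docs"])
```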

</details>

---
*This dataset card was automatically generated using [MTEB](https://github.com/embeddings-benchmark/mteb)*