Datasets: mteb · Modalities: Text · Formats: parquet · Libraries: Datasets, pandas

Samoed committed 47e98dc (verified) · 1 Parent(s): 1f7e6a9

Add dataset card

Files changed (1): README.md (+432 −136)
README.md CHANGED
@@ -1,150 +1,446 @@
  ---
  language:
- - de
- - en
- - ja
- dataset_info:
- - config_name: de
-   features:
-   - name: text
-     dtype: string
-   - name: label
-     dtype: int32
-   - name: label_text
-     dtype: string
-   splits:
-   - name: train
-     num_bytes: 839355
-     num_examples: 5600
-   - name: validation
-     num_bytes: 72051
-     num_examples: 466
-   - name: test
-     num_bytes: 142977
-     num_examples: 934
-   download_size: 610356
-   dataset_size: 1054383
- - config_name: en
-   features:
-   - name: text
-     dtype: string
-   - name: label
-     dtype: int32
-   - name: label_text
-     dtype: string
-   splits:
-   - name: train
-     num_bytes: 548743
-     num_examples: 4018
-   - name: validation
-     num_bytes: 46405
-     num_examples: 335
-   - name: test
-     num_bytes: 90712
-     num_examples: 670
-   download_size: 382768
-   dataset_size: 685860
- - config_name: en-ext
-   features:
-   - name: text
-     dtype: string
-   - name: label
-     dtype: int32
-   - name: label_text
-     dtype: string
-   splits:
-   - name: train
-     num_bytes: 1053699
-     num_examples: 8000
-   - name: validation
-     num_bytes: 87748
-     num_examples: 666
-   - name: test
-     num_bytes: 174870
-     num_examples: 1334
-   download_size: 731478
-   dataset_size: 1316317
- - config_name: ja
-   features:
-   - name: text
-     dtype: string
-   - name: label
-     dtype: int32
-   - name: label_text
-     dtype: string
-   splits:
-   - name: train
-     num_bytes: 862548
-     num_examples: 5600
-   - name: validation
-     num_bytes: 73019
-     num_examples: 466
-   - name: test
-     num_bytes: 143450
-     num_examples: 934
-   download_size: 564439
-   dataset_size: 1079017
- configs:
- - config_name: de
-   data_files:
-   - split: train
-     path: de/train-*
-   - split: validation
-     path: de/validation-*
-   - split: test
-     path: de/test-*
- - config_name: en
-   data_files:
-   - split: train
-     path: en/train-*
-   - split: validation
-     path: en/validation-*
-   - split: test
-     path: en/test-*
-   default: true
- - config_name: en-ext
-   data_files:
-   - split: train
-     path: en-ext/train-*
-   - split: validation
-     path: en-ext/validation-*
-   - split: test
-     path: en-ext/test-*
- - config_name: ja
-   data_files:
-   - split: train
-     path: ja/train-*
-   - split: validation
-     path: ja/validation-*
-   - split: test
-     path: ja/test-*
  ---

- # Amazon Multilingual Counterfactual Dataset
-
- The dataset contains sentences from Amazon customer reviews (sampled from the Amazon product review dataset) annotated for counterfactual detection (CFD), a binary classification task. Counterfactual statements describe events that did not or cannot take place. They can be identified as statements of the form "if p had been true, then q would have been true", i.e. assertions whose antecedent (p) and consequent (q) are known or assumed to be false (for example, "If it had fit, I would have kept it").
-
- The key features of this dataset are:
-
- * The dataset is multilingual and contains sentences in English, German, and Japanese.
- * The labeling was done by professional linguists, and high quality was ensured.
- * The dataset is supplemented with the annotation guidelines and definitions worked out by professional linguists. We also provide lists of clue words that are typical of counterfactual sentences and were used for initial data filtering; these lists were likewise compiled by professional linguists.
-
- Please see the [paper](https://arxiv.org/abs/2104.06893) for data statistics and a detailed description of data collection and annotation.
-
- GitHub repo URL: https://github.com/amazon-research/amazon-multilingual-counterfactual-dataset
-
- ## Usage
-
- You can load each of the languages as follows:
-
  ```python
- from datasets import get_dataset_config_names, load_dataset
-
- dataset_id = "SetFit/amazon_counterfactual"
- # Returns ['de', 'en', 'en-ext', 'ja']
- configs = get_dataset_config_names(dataset_id)
- # Load the English subset
- dset = load_dataset(dataset_id, name="en")
- ```
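As a usage note for the schema declared in the frontmatter above (text, label, label_text), here is a minimal sketch of inspecting a loaded record; the exact label strings are whatever the dataset ships, so the print is illustrative only:

```python
from datasets import load_dataset

# Load the English subset and look at one row.
dset = load_dataset("SetFit/amazon_counterfactual", name="en")
row = dset["train"][0]

# Fields per the card's schema: text (string), label (int32), label_text (string).
print(row["text"], row["label"], row["label_text"])
```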
  ---
+ annotations_creators:
+ - human-annotated
  language:
+ - eng
+ - eng
+ - deu
+ - jpn
+ license: cc-by-4.0
+ multilinguality: multilingual
+ task_categories:
+ - text-classification
+ task_ids:
+ - Counterfactual Detection
+ tags:
+ - mteb
+ - text
  ---
+ <!-- adapted from https://github.com/huggingface/huggingface_hub/blob/v0.30.2/src/huggingface_hub/templates/datasetcard_template.md -->
+
+ <div align="center" style="padding: 40px 20px; background-color: white; border-radius: 12px; box-shadow: 0 2px 10px rgba(0, 0, 0, 0.05); max-width: 600px; margin: 0 auto;">
+ <h1 style="font-size: 3.5rem; color: #1a1a1a; margin: 0 0 20px 0; letter-spacing: 2px; font-weight: 700;">AmazonCounterfactualClassification</h1>
+ <div style="font-size: 1.5rem; color: #4a4a4a; margin-bottom: 5px; font-weight: 300;">An <a href="https://github.com/embeddings-benchmark/mteb" style="color: #2c5282; font-weight: 600; text-decoration: none;" onmouseover="this.style.textDecoration='underline'" onmouseout="this.style.textDecoration='none'">MTEB</a> dataset</div>
+ <div style="font-size: 0.9rem; color: #2c5282; margin-top: 10px;">Massive Text Embedding Benchmark</div>
+ </div>
+
+ A collection of Amazon customer reviews annotated for counterfactual detection, a binary text classification task.
+
+ |               |                                  |
+ |---------------|----------------------------------|
+ | Task category | t2c                              |
+ | Domains       | Reviews, Written                 |
+ | Reference     | https://arxiv.org/abs/2104.06893 |
+
+ ## How to evaluate on this task
+
+ You can evaluate an embedding model on this dataset using the following code:
+
+ ```python
+ import mteb
+
+ tasks = mteb.get_tasks(["AmazonCounterfactualClassification"])
+ evaluator = mteb.MTEB(tasks)
+
+ model = mteb.get_model("YOUR_MODEL_NAME")  # any model name known to mteb
+ evaluator.run(model)
+ ```
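The task covers four language subsets (en, en-ext, de, ja). As a minimal sketch of how you might evaluate only some of them, assuming `mteb`'s `languages` filter on `get_tasks` and using an arbitrarily chosen sentence-transformers model purely for illustration:

```python
import mteb

# Keep only the English subsets of the task (ISO 639-3 codes).
tasks = mteb.get_tasks(
    tasks=["AmazonCounterfactualClassification"],
    languages=["eng"],
)
evaluator = mteb.MTEB(tasks)

# Any embedding model known to mteb would do; this one is only an example.
model = mteb.get_model("sentence-transformers/all-MiniLM-L6-v2")
evaluator.run(model, output_folder="results")
```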
+
+ <!-- Datasets want link to arxiv in readme to autolink dataset with paper -->
+ To learn more about how to run models on `mteb` tasks, check out the [GitHub repository](https://github.com/embeddings-benchmark/mteb).
+
+ ## Citation
+
+ If you use this dataset, please cite the dataset as well as [mteb](https://github.com/embeddings-benchmark/mteb), as this dataset likely includes additional processing as part of the [MMTEB Contribution](https://github.com/embeddings-benchmark/mteb/tree/main/docs/mmteb).
+
+ ```bibtex
+ @inproceedings{oneill-etal-2021-wish,
+   abstract = {Counterfactual statements describe events that did not or cannot take place. We consider the problem of counterfactual detection (CFD) in product reviews. For this purpose, we annotate a multilingual CFD dataset from Amazon product reviews covering counterfactual statements written in English, German, and Japanese languages. The dataset is unique as it contains counterfactuals in multiple languages, covers a new application area of e-commerce reviews, and provides high quality professional annotations. We train CFD models using different text representation methods and classifiers. We find that these models are robust against the selectional biases introduced due to cue phrase-based sentence selection. Moreover, our CFD dataset is compatible with prior datasets and can be merged to learn accurate CFD models. Applying machine translation on English counterfactual examples to create multilingual data performs poorly, demonstrating the language-specificity of this problem, which has been ignored so far.},
+   address = {Online and Punta Cana, Dominican Republic},
+   author = {O{'}Neill, James and
+     Rozenshtein, Polina and
+     Kiryo, Ryuichi and
+     Kubota, Motoko and
+     Bollegala, Danushka},
+   booktitle = {Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing},
+   doi = {10.18653/v1/2021.emnlp-main.568},
+   editor = {Moens, Marie-Francine and
+     Huang, Xuanjing and
+     Specia, Lucia and
+     Yih, Scott Wen-tau},
+   month = nov,
+   pages = {7092--7108},
+   publisher = {Association for Computational Linguistics},
+   title = {{I} Wish {I} Would Have Loved This One, But {I} Didn{'}t {--} A Multilingual Dataset for Counterfactual Detection in Product Review},
+   url = {https://aclanthology.org/2021.emnlp-main.568},
+   year = {2021},
+ }
+
+ @article{enevoldsen2025mmtebmassivemultilingualtext,
+   title = {MMTEB: Massive Multilingual Text Embedding Benchmark},
+   author = {Kenneth Enevoldsen and Isaac Chung and Imene Kerboua and Márton Kardos and Ashwin Mathur and David Stap and Jay Gala and Wissam Siblini and Dominik Krzemiński and Genta Indra Winata and Saba Sturua and Saiteja Utpala and Mathieu Ciancone and Marion Schaeffer and Gabriel Sequeira and Diganta Misra and Shreeya Dhakal and Jonathan Rystrøm and Roman Solomatin and Ömer Çağatan and Akash Kundu and Martin Bernstorff and Shitao Xiao and Akshita Sukhlecha and Bhavish Pahwa and Rafał Poświata and Kranthi Kiran GV and Shawon Ashraf and Daniel Auras and Björn Plüster and Jan Philipp Harries and Loïc Magne and Isabelle Mohr and Mariya Hendriksen and Dawei Zhu and Hippolyte Gisserot-Boukhlef and Tom Aarsen and Jan Kostkan and Konrad Wojtasik and Taemin Lee and Marek Šuppa and Crystina Zhang and Roberta Rocca and Mohammed Hamdy and Andrianos Michail and John Yang and Manuel Faysse and Aleksei Vatolin and Nandan Thakur and Manan Dey and Dipam Vasani and Pranjal Chitale and Simone Tedeschi and Nguyen Tai and Artem Snegirev and Michael Günther and Mengzhou Xia and Weijia Shi and Xing Han Lù and Jordan Clive and Gayatri Krishnakumar and Anna Maksimova and Silvan Wehrli and Maria Tikhonova and Henil Panchal and Aleksandr Abramov and Malte Ostendorff and Zheng Liu and Simon Clematide and Lester James Miranda and Alena Fenogenova and Guangyu Song and Ruqiya Bin Safi and Wen-Ding Li and Alessia Borghini and Federico Cassano and Hongjin Su and Jimmy Lin and Howard Yen and Lasse Hansen and Sara Hooker and Chenghao Xiao and Vaibhav Adlakha and Orion Weller and Siva Reddy and Niklas Muennighoff},
+   publisher = {arXiv},
+   journal = {arXiv preprint arXiv:2502.13595},
+   year = {2025},
+   url = {https://arxiv.org/abs/2502.13595},
+   doi = {10.48550/arXiv.2502.13595},
+ }
+
+ @article{muennighoff2022mteb,
+   author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Lo{\"\i}c and Reimers, Nils},
+   title = {MTEB: Massive Text Embedding Benchmark},
+   publisher = {arXiv},
+   journal = {arXiv preprint arXiv:2210.07316},
+   year = {2022},
+   url = {https://arxiv.org/abs/2210.07316},
+   doi = {10.48550/ARXIV.2210.07316},
+ }
  ```
+
+ ## Dataset Statistics
+ <details>
+   <summary>Dataset Statistics</summary>
+
+ The following JSON contains the descriptive statistics for this task. They can also be obtained using:
+
+ ```python
+ import mteb
+
+ task = mteb.get_task("AmazonCounterfactualClassification")
+
+ desc_stats = task.metadata.descriptive_stats
+ ```
+
+ ```json
+ {
+   "validation": {
+     "num_samples": 1933,
+     "number_of_characters": 183142,
+     "number_texts_intersect_with_train": 552,
+     "min_text_length": 9,
+     "average_text_length": 94.74495602690119,
+     "max_text_length": 525,
+     "unique_texts": 1903,
+     "min_labels_per_text": 1,
+     "average_label_per_text": 1.0,
+     "max_labels_per_text": 1,
+     "unique_labels": 2,
+     "labels": {
+       "0": {"count": 1437},
+       "1": {"count": 496}
+     },
+     "hf_subset_descriptive_stats": {
+       "en-ext": {
+         "num_samples": 666,
+         "number_of_characters": 68028,
+         "number_texts_intersect_with_train": 0,
+         "min_text_length": 31,
+         "average_text_length": 102.14414414414415,
+         "max_text_length": 370,
+         "unique_texts": 666,
+         "min_labels_per_text": 1,
+         "average_label_per_text": 1.0,
+         "max_labels_per_text": 1,
+         "unique_labels": 2,
+         "labels": {
+           "0": {"count": 599},
+           "1": {"count": 67}
+         }
+       },
+       "en": {
+         "num_samples": 335,
+         "number_of_characters": 36583,
+         "number_texts_intersect_with_train": 0,
+         "min_text_length": 36,
+         "average_text_length": 109.20298507462687,
+         "max_text_length": 470,
+         "unique_texts": 335,
+         "min_labels_per_text": 1,
+         "average_label_per_text": 1.0,
+         "max_labels_per_text": 1,
+         "unique_labels": 2,
+         "labels": {
+           "0": {"count": 277},
+           "1": {"count": 58}
+         }
+       },
+       "de": {
+         "num_samples": 466,
+         "number_of_characters": 58251,
+         "number_texts_intersect_with_train": 3,
+         "min_text_length": 22,
+         "average_text_length": 125.00214592274678,
+         "max_text_length": 525,
+         "unique_texts": 466,
+         "min_labels_per_text": 1,
+         "average_label_per_text": 1.0,
+         "max_labels_per_text": 1,
+         "unique_labels": 2,
+         "labels": {
+           "0": {"count": 141},
+           "1": {"count": 325}
+         }
+       },
+       "ja": {
+         "num_samples": 466,
+         "number_of_characters": 20280,
+         "number_texts_intersect_with_train": 13,
+         "min_text_length": 9,
+         "average_text_length": 43.51931330472103,
+         "max_text_length": 191,
+         "unique_texts": 464,
+         "min_labels_per_text": 1,
+         "average_label_per_text": 1.0,
+         "max_labels_per_text": 1,
+         "unique_labels": 2,
+         "labels": {
+           "0": {"count": 420},
+           "1": {"count": 46}
+         }
+       }
+     }
+   },
+   "test": {
+     "num_samples": 3872,
+     "number_of_characters": 361556,
+     "number_texts_intersect_with_train": 1128,
+     "min_text_length": 6,
+     "average_text_length": 93.37706611570248,
+     "max_text_length": 568,
+     "unique_texts": 3779,
+     "min_labels_per_text": 1,
+     "average_label_per_text": 1.0,
+     "max_labels_per_text": 1,
+     "unique_labels": 2,
+     "labels": {
+       "1": {"count": 1016},
+       "0": {"count": 2856}
+     },
+     "hf_subset_descriptive_stats": {
+       "en-ext": {
+         "num_samples": 1334,
+         "number_of_characters": 135364,
+         "number_texts_intersect_with_train": 1,
+         "min_text_length": 6,
+         "average_text_length": 101.47226386806597,
+         "max_text_length": 420,
+         "unique_texts": 1333,
+         "min_labels_per_text": 1,
+         "average_label_per_text": 1.0,
+         "max_labels_per_text": 1,
+         "unique_labels": 2,
+         "labels": {
+           "1": {"count": 139},
+           "0": {"count": 1195}
+         }
+       },
+       "en": {
+         "num_samples": 670,
+         "number_of_characters": 71118,
+         "number_texts_intersect_with_train": 0,
+         "min_text_length": 32,
+         "average_text_length": 106.14626865671642,
+         "max_text_length": 541,
+         "unique_texts": 670,
+         "min_labels_per_text": 1,
+         "average_label_per_text": 1.0,
+         "max_labels_per_text": 1,
+         "unique_labels": 2,
+         "labels": {
+           "0": {"count": 539},
+           "1": {"count": 131}
+         }
+       },
+       "de": {
+         "num_samples": 934,
+         "number_of_characters": 115432,
+         "number_texts_intersect_with_train": 3,
+         "min_text_length": 23,
+         "average_text_length": 123.58886509635974,
+         "max_text_length": 568,
+         "unique_texts": 933,
+         "min_labels_per_text": 1,
+         "average_label_per_text": 1.0,
+         "max_labels_per_text": 1,
+         "unique_labels": 2,
+         "labels": {
+           "0": {"count": 284},
+           "1": {"count": 650}
+         }
+       },
+       "ja": {
+         "num_samples": 934,
+         "number_of_characters": 39642,
+         "number_texts_intersect_with_train": 27,
+         "min_text_length": 6,
+         "average_text_length": 42.44325481798715,
+         "max_text_length": 165,
+         "unique_texts": 934,
+         "min_labels_per_text": 1,
+         "average_label_per_text": 1.0,
+         "max_labels_per_text": 1,
+         "unique_labels": 2,
+         "labels": {
+           "0": {"count": 838},
+           "1": {"count": 96}
+         }
+       }
+     }
+   },
+   "train": {
+     "num_samples": 23218,
+     "number_of_characters": 2161346,
+     "number_texts_intersect_with_train": null,
+     "min_text_length": 6,
+     "average_text_length": 93.08924110603841,
+     "max_text_length": 572,
+     "unique_texts": 19945,
+     "min_labels_per_text": 1,
+     "average_label_per_text": 1.0,
+     "max_labels_per_text": 1,
+     "unique_labels": 2,
+     "labels": {
+       "0": {"count": 17239},
+       "1": {"count": 5979}
+     },
+     "hf_subset_descriptive_stats": {
+       "en-ext": {
+         "num_samples": 8000,
+         "number_of_characters": 816814,
+         "number_texts_intersect_with_train": null,
+         "min_text_length": 6,
+         "average_text_length": 102.10175,
+         "max_text_length": 541,
+         "unique_texts": 7998,
+         "min_labels_per_text": 1,
+         "average_label_per_text": 1.0,
+         "max_labels_per_text": 1,
+         "unique_labels": 2,
+         "labels": {
+           "0": {"count": 7176},
+           "1": {"count": 824}
+         }
+       },
+       "en": {
+         "num_samples": 4018,
+         "number_of_characters": 431133,
+         "number_texts_intersect_with_train": null,
+         "min_text_length": 33,
+         "average_text_length": 107.30039820806371,
+         "max_text_length": 514,
+         "unique_texts": 4018,
+         "min_labels_per_text": 1,
+         "average_label_per_text": 1.0,
+         "max_labels_per_text": 1,
+         "unique_labels": 2,
+         "labels": {
+           "1": {"count": 765},
+           "0": {"count": 3253}
+         }
+       },
+       "de": {
+         "num_samples": 5600,
+         "number_of_characters": 674491,
+         "number_texts_intersect_with_train": null,
+         "min_text_length": 19,
+         "average_text_length": 120.44482142857143,
+         "max_text_length": 572,
+         "unique_texts": 5587,
+         "min_labels_per_text": 1,
+         "average_label_per_text": 1.0,
+         "max_labels_per_text": 1,
+         "unique_labels": 2,
+         "labels": {
+           "1": {"count": 3865},
+           "0": {"count": 1735}
+         }
+       },
+       "ja": {
+         "num_samples": 5600,
+         "number_of_characters": 238908,
+         "number_texts_intersect_with_train": null,
+         "min_text_length": 8,
+         "average_text_length": 42.662142857142854,
+         "max_text_length": 190,
+         "unique_texts": 5530,
+         "min_labels_per_text": 1,
+         "average_label_per_text": 1.0,
+         "max_labels_per_text": 1,
+         "unique_labels": 2,
+         "labels": {
+           "0": {"count": 5075},
+           "1": {"count": 525}
+         }
+       }
+     }
+   }
+ }
+ ```
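As a usage note, the nested structure above can be navigated directly once retrieved. A small sketch, assuming the `descriptive_stats` layout shown in this card:

```python
import mteb

task = mteb.get_task("AmazonCounterfactualClassification")
stats = task.metadata.descriptive_stats

# Label balance of the German test subset, per the JSON above:
de_test = stats["test"]["hf_subset_descriptive_stats"]["de"]
print(de_test["labels"])  # expected: {"0": {"count": 284}, "1": {"count": 650}}
```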
+
+ </details>
+
+ ---
+ *This dataset card was automatically generated using [MTEB](https://github.com/embeddings-benchmark/mteb)*