---
configs:
  data_files:
  - split: train
    path: data/train-*
license: cc-by-sa-4.0
task_categories:
- question-answering
language:
- en
pretty_name: RLHN-100K
size_categories:
- 10K<n<100K
---

# Dataset Card for RLHN-100K

## Dataset Description

[Repository](https://github.com/castorini/rlhn) | [Paper](https://huggingface.co/papers/2505.16967) | [ArXiv](https://arxiv.org/abs/2505.16967)

RLHN is a cascading LLM framework designed to accurately relabel hard negatives in existing IR/RAG training datasets, such as MS MARCO and HotpotQA.

This Tevatron-format dataset (100K training pairs) contains the queries, the positives together with relabeled hard negatives, and the remaining hard negatives for 7 datasets in the BGE training collection.

This repository contains training pairs that can be used to fine-tune embedding, ColBERT (multi-vector), and reranker models.

The original dataset (lower quality; still containing false negatives) can be found at [rlhn/default-100K](https://huggingface.co/datasets/rlhn/default-100K/).

> Note: RLHN datasets are not **new** training datasets, but rather existing BGE collection training datasets with their hard negatives cleaned!

## Dataset Structure

To access the data using the HuggingFace `datasets` library:

```python
import datasets

rlhn = datasets.load_dataset('rlhn/rlhn-100K')

# training set:
for data in rlhn['train']:
    query_id = data["query_id"]  # md5 hash of the query
    query = data["query"]        # query text
    subset = data["subset"]      # source training dataset, e.g., fiqa or msmarco_passage

    # positive passages
    for positive_passage in data["positive_passages"]:
        doc_id = positive_passage["docid"]
        title = positive_passage["title"]  # title is usually empty; it is included in the text
        text = positive_passage["text"]    # contains both the title & text

    # hard negative passages
    for negative_passage in data["negative_passages"]:
        doc_id = negative_passage["docid"]
        title = negative_passage["title"]  # title is usually empty; it is included in the text
        text = negative_passage["text"]    # contains both the title & text
```
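
To work with a single source dataset, you can filter on the `subset` field. The snippet below is a minimal sketch using the standard `datasets` filtering API; `fiqa` is just an example subset name.

```python
import datasets

# Load only the training split (same repository as above).
rlhn_train = datasets.load_dataset('rlhn/rlhn-100K', split='train')

# Keep only the pairs originating from one source dataset, e.g., fiqa.
fiqa_pairs = rlhn_train.filter(lambda example: example["subset"] == "fiqa")
print(len(fiqa_pairs))  # number of fiqa training pairs in this split
```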

## Original Dataset Statistics

The following table contains the number of training pairs for each training dataset included in RLHN. These numbers are for the default setting.

| Dataset          | 100K splits | 250K splits | 400K splits | 680K splits |
|------------------|-------------|-------------|-------------|-------------|
| arguana          | 4,065       | 4,065       | 4,065       | 4,065       |
| fever            | 28,755      | 28,755      | 28,755      | 28,755      |
| fiqa             | 5,500       | 5,500       | 5,500       | 5,500       |
| hotpotqa         | 10,250      | 30,000      | 84,516      | 84,516      |
| msmarco_passage  | 49,571      | 145,000     | 210,000     | 485,823     |
| nq               | 6,110       | 30,000      | 58,568      | 58,568      |
| scidocsrr        | 12,654      | 12,654      | 12,654      | 12,654      |
| **total**        | **96,167**  | **255,974** | **404,058** | **679,881** |
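
If you want to verify these counts for the split you have downloaded, a minimal sketch (assuming the `subset` field shown in the Dataset Structure section) is:

```python
from collections import Counter

import datasets

# Count training pairs per source dataset in the 100K training split.
rlhn_train = datasets.load_dataset('rlhn/rlhn-100K', split='train')
counts = Counter(rlhn_train["subset"])

for subset, count in sorted(counts.items()):
    print(f"{subset}: {count}")
print(f"total: {sum(counts.values())}")
```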

## License

The RLHN dataset is made available under the CC-BY-SA 4.0 license.

## Hashing & IDs

We generate an md5 hash as the unique identifier (ID) for both queries and documents, using the code below:

```python
import hashlib

def get_md5_hash(text):
    """Calculates the MD5 hash of a given string.

    Args:
        text: The string to hash.

    Returns:
        The MD5 hash of the string as a hexadecimal string.
    """
    text_bytes = text.encode('utf-8')  # encode the string to bytes
    md5_hash = hashlib.md5(text_bytes).hexdigest()
    return md5_hash
```
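
As a usage example, the snippet below derives an ID from a (hypothetical) query string; whether the IDs in this dataset are computed over the raw query/passage text exactly as shown is an assumption based on the description above.

```python
import hashlib

# Hypothetical query text, for illustration only.
example_query = "what is the capital of France?"
query_id = hashlib.md5(example_query.encode('utf-8')).hexdigest()
print(query_id)  # a 32-character hexadecimal string

# Under the assumption above, a loaded training pair should satisfy:
#     data["query_id"] == get_md5_hash(data["query"])
```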

## Citation

```bibtex
@misc{thakur2025relabel,
      title={Fixing Data That Hurts Performance: Cascading LLMs to Relabel Hard Negatives for Robust Information Retrieval},
      author={Nandan Thakur and Crystina Zhang and Xueguang Ma and Jimmy Lin},
      year={2025},
      eprint={2505.16967},
      archivePrefix={arXiv},
      primaryClass={cs.IR},
      url={https://arxiv.org/abs/2505.16967},
}
```