---
dataset_info:
  features:
  - name: query_id
    dtype: string
  - name: query
    dtype: string
  - name: positive_passages
    list:
    - name: docid
      dtype: string
    - name: text
      dtype: string
    - name: title
      dtype: string
  - name: negative_passages
    list:
    - name: docid
      dtype: string
    - name: text
      dtype: string
    - name: title
      dtype: string
  - name: subset
    dtype: string
  splits:
  - name: train
    num_bytes: 10961970907
    num_examples: 648766
  download_size: 6447294919
  dataset_size: 10961970907
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: cc-by-sa-4.0
task_categories:
- question-answering
language:
- en
pretty_name: RLHN-680K
size_categories:
- 100K<n<1M
---

# Dataset Card for RLHN-680K

## Dataset Description
[Repository](https://github.com/castorini/rlhn) |
[Paper](https://huggingface.co/papers/2505.16967) |
[ArXiv](https://arxiv.org/abs/2505.16967)

RLHN is a cascading LLM framework designed to accurately relabel hard negatives in existing IR/RAG training datasets, such as MS MARCO and HotpotQA.

This Tevatron-format dataset (680K training pairs) contains the queries, the positive passages (original positives plus hard negatives relabeled as positives), and the remaining hard negatives for 7 datasets in the BGE training collection.

This repository contains the training pairs that can be used to fine-tune embedding, multi-vector (e.g., ColBERT), and reranker models.

The original, uncleaned dataset (which still contains false negatives) can be found at [rlhn/default-680K](https://huggingface.co/datasets/rlhn/default-680K/).

> Note: RLHN datasets are not **new** training datasets, but rather existing BGE collection training datasets with hard negatives cleaned!

## Dataset Structure

To access the data using HuggingFace `datasets`:
```python
import datasets

rlhn = datasets.load_dataset('rlhn/rlhn-680K')

# training set:
for data in rlhn['train']:
    query_id = data["query_id"]                            # md5 hash of the query text
    query = data["query"]                                  # query text
    subset = data["subset"]                                # training dataset, e.g., fiqa or msmarco_passage

    # positive passages
    for positive_passage in data["positive_passages"]:
        doc_id = positive_passage["docid"]
        title = positive_passage["title"]                  # title is usually empty, added in text
        text = positive_passage["text"]                    # contains both the title & text

    # hard negative passages
    for negative_passage in data["negative_passages"]:
        doc_id = negative_passage["docid"]
        title = negative_passage["title"]                  # title is usually empty, added in text
        text = negative_passage["text"]                    # contains both the title & text
```
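The nested schema above can be exercised without downloading the full ~6.4 GB dump. The following sketch builds two toy records with the same fields (the IDs and passage text are made up for illustration) and filters them by `subset`, as you might before per-domain fine-tuning:

```python
# Toy records mirroring the RLHN schema; docids, hashes, and text are hypothetical.
records = [
    {
        "query_id": "3a1b-placeholder",  # would be the md5 hash of the query text
        "query": "What is a hard negative?",
        "positive_passages": [
            {"docid": "d1", "title": "", "text": "A hard negative is a passage that..."}
        ],
        "negative_passages": [
            {"docid": "d2", "title": "", "text": "An unrelated passage..."}
        ],
        "subset": "fiqa",
    },
    {
        "query_id": "9c2d-placeholder",
        "query": "Who wrote Hamlet?",
        "positive_passages": [
            {"docid": "d3", "title": "", "text": "Hamlet was written by..."}
        ],
        "negative_passages": [],
        "subset": "msmarco_passage",
    },
]

# Keep only the pairs that came from one source dataset.
fiqa_pairs = [r for r in records if r["subset"] == "fiqa"]
print(len(fiqa_pairs))  # 1
```

With the real dataset, the same filter can be applied lazily via `datasets.Dataset.filter` instead of a list comprehension.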


## Original Dataset Statistics 
The following table contains the number of training pairs for each training dataset included in RLHN. These numbers are for the default setting.  

| Dataset           | 100K splits | 250K splits | 400K splits | 680K splits  |
|-------------------|-------------|-------------|-------------|------------- |
| arguana           | 4,065       | 4,065       | 4,065       | 4,065        |
| fever             | 28,755      | 28,755      | 28,755      | 28,755       |
| fiqa              | 5,500       | 5,500       | 5,500       | 5,500        |
| hotpotqa          | 10,250      | 30,000      | 84,516      | 84,516       |
| msmarco_passage   | 49,571      | 145,000     | 210,000     | 485,823      |
| nq                | 6,110       | 30,000      | 58,568      | 58,568       |
| scidocsrr         | 12,654      | 12,654      | 12,654      | 12,654       |
| **total**         | **96,167**  | **255,974** | **404,058** | **679,881**  |
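As a quick sanity check, the 680K column can be summed directly from the per-dataset counts in the table above:

```python
# Per-dataset training-pair counts for the 680K split, copied from the table.
pairs_680k = {
    "arguana": 4_065,
    "fever": 28_755,
    "fiqa": 5_500,
    "hotpotqa": 84_516,
    "msmarco_passage": 485_823,
    "nq": 58_568,
    "scidocsrr": 12_654,
}

total = sum(pairs_680k.values())
print(total)  # 679881
```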


## License
The RLHN dataset is made available with the CC-BY-SA 4.0 license.

## Hashing & IDs

We generate an MD5 hash as the unique identifier (ID) for both queries and documents, using the code below:

```python
import hashlib

def get_md5_hash(text):
  """Calculates the MD5 hash of a given string.
  Args:
    text: The string to hash.
  Returns:
    The MD5 hash of the string as a hexadecimal string.
  """
  text_bytes = text.encode('utf-8')  # Encode the string to bytes
  md5_hash = hashlib.md5(text_bytes).hexdigest()
  return md5_hash
```
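For example, hashing a short string with the same `hashlib.md5(...).hexdigest()` call yields a 32-character hexadecimal digest, and identical input text always maps to the same ID:

```python
import hashlib

# Equivalent to get_md5_hash("hello") above.
qid = hashlib.md5("hello".encode('utf-8')).hexdigest()
print(qid)  # 5d41402abc4b2a76b9719d911017c592
print(len(qid), qid == hashlib.md5(b"hello").hexdigest())  # 32 True
```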

## Citation
```
@misc{thakur2025relabel,
      title={Fixing Data That Hurts Performance: Cascading LLMs to Relabel Hard Negatives for Robust Information Retrieval}, 
      author={Nandan Thakur and Crystina Zhang and Xueguang Ma and Jimmy Lin},
      year={2025},
      eprint={2505.16967},
      archivePrefix={arXiv},
      primaryClass={cs.IR},
      url={https://arxiv.org/abs/2505.16967}, 
}
```