---
license: apache-2.0
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
dataset_info:
  features:
  - name: query
    dtype: string
  - name: docs
    sequence: string
  - name: scores
    sequence: float64
  splits:
  - name: train
    num_bytes: 957899062
    num_examples: 502939
  download_size: 917108315
  dataset_size: 957899062
---
# Dataset Card for **MS MARCO Hard Negatives (OpenSearch)**
## Dataset Summary
This dataset is derived from the **MS MARCO** train split ([Hugging Face](https://huggingface.co/datasets/mteb/msmarco)) and provides **hard-negative mining** annotations for training retrieval systems. For each query in the source split, we retrieve the **top-100 candidate documents** with the [opensearch-project/opensearch-neural-sparse-encoding-doc-v1](https://huggingface.co/opensearch-project/opensearch-neural-sparse-encoding-doc-v1) model and attach **re-ranking scores** from two cross-encoders: [cross-encoder/ms-marco-MiniLM-L12-v2](https://huggingface.co/cross-encoder/ms-marco-MiniLM-L12-v2) and [castorini/monot5-3b-msmarco](https://huggingface.co/castorini/monot5-3b-msmarco).
> ⚠️ **Licensing/Usage:** Because this dataset is derived from MS MARCO, please review Microsoft’s terms before using this dataset. ([Microsoft GitHub](https://microsoft.github.io/msmarco/Datasets.html), [GitHub](https://github.com/microsoft/msmarco))
---
## How to Load
```python
import datasets
ds = datasets.load_dataset("opensearch-project/msmarco-hard-negatives", split="train")
```
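A quick way to check the schema after loading (a minimal sketch; the `query` and `docs` fields contain MS MARCO IDs, which the training example below resolves to raw text):
```python
import datasets

# Load the train split (same call as above)
ds = datasets.load_dataset("opensearch-project/msmarco-hard-negatives", split="train")

# Inspect the declared features: query (string), docs (sequence of strings),
# scores (sequence of floats)
print(ds.features)
print(ds.num_rows)

# Look at one record: docs holds the retrieved candidates and scores the
# corresponding re-ranking scores
example = ds[0]
print(example["query"])
print(len(example["docs"]), len(example["scores"]))
```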
---
## Training Example
Related training example: **opensearch-sparse-model-tuning-sample**. ([GitHub](https://github.com/zhichao-aws/opensearch-sparse-model-tuning-sample))
To convert the dataset to a text-only format for training with the sample repo:
```python
import datasets
# 1) Load datasets
msmarco_hard_negatives = datasets.load_dataset(
    "opensearch-project/msmarco-hard-negatives", split="train"
)
msmarco_queries = datasets.load_dataset("BeIR/msmarco", "queries")["queries"]
msmarco_corpus = datasets.load_dataset("BeIR/msmarco", "corpus")["corpus"]

# 2) Fix occasional text encoding issues
def transform_str(s):
    try:
        return s.encode("latin1").decode("utf-8")
    except Exception:
        return s

msmarco_corpus = msmarco_corpus.map(
    lambda x: {"text": transform_str(x["text"])}, num_proc=30
)

# 3) Build convenient lookup tables
id_to_text = {_id: text for _id, text in zip(msmarco_corpus["_id"], msmarco_corpus["text"])}
qid_to_text = {_id: text for _id, text in zip(msmarco_queries["_id"], msmarco_queries["text"])}

# 4) Replace IDs with raw texts to get a text-only dataset
msmarco_hard_negatives = msmarco_hard_negatives.map(
    lambda x: {
        "query": qid_to_text[x["query"]],
        "docs": [id_to_text[doc] for doc in x["docs"]],
    },
    num_proc=30,
)

# 5) Save to disk (directory will contain the text-only view)
msmarco_hard_negatives.save_to_disk("data/msmarco_ft")
```
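After saving, the text-only view can be reloaded for training with `datasets.load_from_disk` (a minimal sketch; the `data/msmarco_ft` path simply matches the save call above):
```python
import datasets

# Reload the text-only dataset written by save_to_disk above
msmarco_ft = datasets.load_from_disk("data/msmarco_ft")

# Each record now holds raw text: a query string, the candidate passages,
# and the cross-encoder scores kept from the original dataset
print(msmarco_ft[0]["query"])
print(len(msmarco_ft[0]["docs"]))
```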
---
## Citation
If you use this dataset, **please cite**:
[Towards Competitive Search Relevance For Inference-Free Learned Sparse Retrievers](https://arxiv.org/abs/2411.04403)
```
@misc{geng2024competitivesearchrelevanceinferencefree,
title={Towards Competitive Search Relevance For Inference-Free Learned Sparse Retrievers},
author={Zhichao Geng and Dongyu Ru and Yang Yang},
year={2024},
eprint={2411.04403},
archivePrefix={arXiv},
primaryClass={cs.IR},
url={https://arxiv.org/abs/2411.04403},
}
```
## Related Papers
- [Exploring $\ell_0$ Sparsification for Inference-free Sparse Retrievers](https://arxiv.org/abs/2504.14839)
## License
This project is licensed under the [Apache v2.0 License](https://github.com/opensearch-project/neural-search/blob/main/LICENSE).
---
## Copyright
Copyright OpenSearch Contributors. See [NOTICE](https://github.com/opensearch-project/neural-search/blob/main/NOTICE) for details.