nthakur committed
Commit 7c95c3e · 1 Parent(s): 5911804

updated README and bugfixed datasets issue

Files changed (2):
  1. README.md +57 -13
  2. nomiracl.py +26 -9
README.md CHANGED
@@ -31,13 +31,33 @@ license:
   - apache-2.0
   ---
 
- # Dataset Card for NoMIRACL
 
- Retrieval Augmented Generation (RAG) is a powerful approach to incorporate external knowledge into large language models (LLMs) to enhance the accuracy and faithfulness of generated responses. However, evaluating LLM robustness in RAG across different language families has been a challenge, leading to gaps in understanding the model's performance against errors in external retrieved knowledge. To address this, we present NoMIRACL, a human-annotated dataset designed for evaluating LLM robustness in RAG across 18 diverse languages.
 
- NoMIRACL includes both a `non-relevant` and a `relevant` subset. The `non-relevant` subset contains queries with all passages manually judged as non-relevant or noisy, while the `relevant` subset includes queries with at least one judged relevant passage. LLM robustness is measured using two key metrics: hallucination rate and error rate.
 
- All the topics are generated by native speakers of each language from our work in [MIRACL](https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00595/117438/MIRACL-A-Multilingual-Retrieval-Dataset-Covering), who also label the relevance between the topics and a given document list. The queries with no relevant documents are used to create the `non-relevant` subset, whereas queries with at least one relevant document (i.e., queries in MIRACL dev and test) are used to create the `relevant` subset.
 
  This repository contains the topics, qrels and top-10 (maximum) annotated documents of NoMIRACL. The whole collection can be found [here](https://huggingface.co/datasets/miracl/miracl-corpus).
@@ -56,8 +76,9 @@ nomiracl = datasets.load_dataset('miracl/nomiracl', language, split=f'{split}.{subset}')
 
  ## Dataset Description
  * **Repository:** https://github.com/project-miracl/nomiracl
- * **Paper:** https://arxiv.org/abs/2312.11361
 
  ## Dataset Structure
  1. To download the files:
@@ -93,7 +114,7 @@ split = 'test' # or 'dev' for development split
  # four combinations: 'dev.relevant', 'dev.non_relevant', 'test.relevant' and 'test.non_relevant'
  nomiracl = datasets.load_dataset('miracl/nomiracl', language, split=f'{split}.{subset}')
 
- # training set:
  for data in nomiracl: # or 'dev', 'testA'
      query_id = data['query_id']
      query = data['query']
@@ -107,15 +128,38 @@ for data in nomiracl: # or 'dev', 'testA'
  ```
 
  ## Dataset Statistics
- For NoMIRACL dataset statistics, please refer to our publication [here](https://arxiv.org/abs/2312.11361).
 
  ## Citation Information
  ```
- @article{thakur2023nomiracl,
-   title={NoMIRACL: Knowing When You Don't Know for Robust Multilingual Retrieval-Augmented Generation},
-   author={Nandan Thakur and Luiz Bonifacio and Xinyu Zhang and Odunayo Ogundepo and Ehsan Kamalloo and David Alfonso-Hermelo and Xiaoguang Li and Qun Liu and Boxing Chen and Mehdi Rezagholizadeh and Jimmy Lin},
-   journal={ArXiv},
-   year={2023},
-   volume={abs/2312.11361}
- }
  ```
   - apache-2.0
   ---
 
+ # Dataset Card for NoMIRACL (:star: EMNLP 2024 Findings Track)
+ <!-- <img src="nomiracl.png" alt="NoMIRACL Hallucination Examination (Generated using miramuse.ai and Adobe photoshop)" width="500" height="400"> -->
 
+ <!-- ## Quick Overview -->
+ This repository contains the topics, qrels and top-k (a maximum of 10) annotated passages. The passage collection can be found here on HF: [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus).
 
+ ```
+ import datasets
+ 
+ language = 'german' # or any of the 18 languages (mentioned above in `languages`)
+ subset = 'relevant' # or 'non_relevant' (two subsets: relevant & non-relevant)
+ split = 'test' # or 'dev' for the development split
+ 
+ # four combinations available: 'dev.relevant', 'dev.non_relevant', 'test.relevant' and 'test.non_relevant'
+ nomiracl = datasets.load_dataset('miracl/nomiracl', language, split=f'{split}.{subset}')
+ ```
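As a quick sanity check after loading, a tiny illustrative snippet (the `query_id` and `query` field names match the iteration example further down; record counts vary by language and subset):

```python
# Inspect the loaded subset: its size and the first entry's query fields.
print(len(nomiracl))
print(nomiracl[0]['query_id'], nomiracl[0]['query'])
```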
 
+ ## What is NoMIRACL?
+ Retrieval Augmented Generation (RAG) is a powerful approach to incorporate external knowledge into large language models (LLMs) to enhance the accuracy and faithfulness of LLM-generated responses. However, evaluating query-passage relevance across diverse language families has been a challenge, leading to gaps in understanding the model's performance against errors in external retrieved knowledge. To address this, we present NoMIRACL, a completely human-annotated dataset designed for evaluating multilingual LLM relevance across 18 diverse languages.
+ 
+ NoMIRACL frames LLM relevance assessment as a binary classification objective and contains two subsets: `non-relevant` and `relevant`. The `non-relevant` subset contains queries whose passages were all manually judged non-relevant by an expert assessor, while the `relevant` subset contains queries with at least one judged relevant passage among the labeled passages. LLM relevance is measured using two key metrics, sketched in code below:
+ - *hallucination rate* (on the `non-relevant` subset): measures the model's tendency to hallucinate an answer when none of the provided passages is relevant to the question (non-answerable).
+ - *error rate* (on the `relevant` subset): measures the model's tendency to fail to identify relevant passages when they are provided for the question (answerable).
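Both metrics reduce to simple ratios over the model's per-query binary decisions. A minimal sketch, assuming an illustrative encoding (True = the model claims some passage is relevant); this helper code is an assumption, not part of the released evaluation scripts:

```python
# Hedged sketch: hallucination rate and error rate from binary decisions.
# Encoding assumption: True means the model claims at least one provided
# passage is relevant; False means it answers "I don't know".

def hallucination_rate(non_relevant_preds: list[bool]) -> float:
    """Fraction of non-answerable queries where the model still claims
    a relevant passage exists, i.e., it hallucinates."""
    return sum(non_relevant_preds) / len(non_relevant_preds)

def error_rate(relevant_preds: list[bool]) -> float:
    """Fraction of answerable queries where the model fails to recognize
    any of the provided relevant passages."""
    return sum(not p for p in relevant_preds) / len(relevant_preds)

print(hallucination_rate([True, False, True, True]))  # 0.75
print(error_rate([True, True, False, True]))          # 0.25
```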
 
+ ## Acknowledgement
+ 
+ This dataset would not have been possible without the topics generated by native speakers of each language in part 1 of our **multilingual RAG universe** work, **MIRACL** [[TACL '23]](https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00595/117438/MIRACL-A-Multilingual-Retrieval-Dataset-Covering). Queries with all passages judged non-relevant are used to create the `non-relevant` subset, whereas queries with at least a single relevant passage (i.e., the MIRACL dev and test splits) are used to create the `relevant` subset.
 
  This repository contains the topics, qrels and top-10 (maximum) annotated documents of NoMIRACL. The whole collection can be found [here](https://huggingface.co/datasets/miracl/miracl-corpus).
 
 
  ## Dataset Description
+ * **Website:** https://nomiracl.github.io
+ * **Paper:** https://aclanthology.org/2024.findings-emnlp.730/
  * **Repository:** https://github.com/project-miracl/nomiracl
 
  ## Dataset Structure
  1. To download the files:
 
  # four combinations: 'dev.relevant', 'dev.non_relevant', 'test.relevant' and 'test.non_relevant'
  nomiracl = datasets.load_dataset('miracl/nomiracl', language, split=f'{split}.{subset}')
 
+ # Individual entry in the `relevant` or `non_relevant` subset
  for data in nomiracl: # or 'dev', 'testA'
      query_id = data['query_id']
      query = data['query']
 
  ```
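The loop body above is truncated by the diff. A hedged sketch of how the remaining fields might be unpacked, assuming the MIRACL-style `positive_passages`/`negative_passages` schema (these field names are an assumption here, not shown in the visible diff; check `nomiracl.features` before relying on them):

```python
for data in nomiracl:
    # Assumed MIRACL-style schema: each passage is a dict with
    # 'docid', 'title' and 'text' keys. Verify with print(nomiracl.features).
    relevant = data.get('positive_passages', [])      # judged relevant
    non_relevant = data.get('negative_passages', [])  # judged non-relevant
    for passage in relevant + non_relevant:
        print(passage['docid'], passage['title'], passage['text'][:80])
```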
 
  ## Dataset Statistics
+ For NoMIRACL dataset statistics, please refer to our EMNLP 2024 Findings publication.
+ 
+ Paper: [https://aclanthology.org/2024.findings-emnlp.730/](https://aclanthology.org/2024.findings-emnlp.730/)
 
  ## Citation Information
+ This work was conducted as a collaboration between the University of Waterloo and Huawei Technologies.
+ 
  ```
+ @inproceedings{thakur-etal-2024-knowing,
+     title = "{``}Knowing When You Don{'}t Know{''}: A Multilingual Relevance Assessment Dataset for Robust Retrieval-Augmented Generation",
+     author = "Thakur, Nandan and
+       Bonifacio, Luiz and
+       Zhang, Crystina and
+       Ogundepo, Odunayo and
+       Kamalloo, Ehsan and
+       Alfonso-Hermelo, David and
+       Li, Xiaoguang and
+       Liu, Qun and
+       Chen, Boxing and
+       Rezagholizadeh, Mehdi and
+       Lin, Jimmy",
+     editor = "Al-Onaizan, Yaser and
+       Bansal, Mohit and
+       Chen, Yun-Nung",
+     booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2024",
+     month = nov,
+     year = "2024",
+     address = "Miami, Florida, USA",
+     publisher = "Association for Computational Linguistics",
+     url = "https://aclanthology.org/2024.findings-emnlp.730",
+     pages = "12508--12526",
+     abstract = "Retrieval-Augmented Generation (RAG) grounds Large Language Model (LLM) output by leveraging external knowledge sources to reduce factual hallucinations. However, prior work lacks a comprehensive evaluation of different language families, making it challenging to evaluate LLM robustness against errors in external retrieved knowledge. To overcome this, we establish **NoMIRACL**, a human-annotated dataset for evaluating LLM robustness in RAG across 18 typologically diverse languages. NoMIRACL includes both a non-relevant and a relevant subset. Queries in the non-relevant subset contain passages judged as non-relevant, whereas queries in the relevant subset include at least a single judged relevant passage. We measure relevance assessment using: (i) *hallucination rate*, measuring model tendency to hallucinate when the answer is not present in passages in the non-relevant subset, and (ii) *error rate*, measuring model inaccuracy to recognize relevant passages in the relevant subset. In our work, we observe that most models struggle to balance the two capacities. Models such as LLAMA-2 and Orca-2 achieve over 88{\%} hallucination rate on the non-relevant subset. Mistral and LLAMA-3 hallucinate less but can achieve up to a 74.9{\%} error rate on the relevant subset. Overall, GPT-4 is observed to provide the best tradeoff on both subsets, highlighting future work necessary to improve LLM robustness. NoMIRACL dataset and evaluation code are available at: https://github.com/project-miracl/nomiracl.",
+ }
  ```
nomiracl.py CHANGED
@@ -25,12 +25,30 @@ from collections import defaultdict
 
  _CITATION = """\
- @article{thakur2023nomiracl,
-   title={NoMIRACL: Knowing When You Don't Know for Robust Multilingual Retrieval-Augmented Generation},
-   author={Nandan Thakur and Luiz Bonifacio and Xinyu Zhang and Odunayo Ogundepo and Ehsan Kamalloo and David Alfonso-Hermelo and Xiaoguang Li and Qun Liu and Boxing Chen and Mehdi Rezagholizadeh and Jimmy Lin},
-   journal={ArXiv},
-   year={2023},
-   volume={abs/2312.11361}
  }
  """
@@ -38,7 +56,7 @@ _DESCRIPTION = """\
  Data Loader for the NoMIRACL dataset.
  """
 
- _URL = "https://github.com/project-miracl/nomiracl"
 
  _DL_URL_FORMAT = "data/{name}"
@@ -140,8 +158,7 @@ class NoMIRACL(datasets.GeneratorBasedBuilder):
      }),
      supervised_keys=("file", "text"),
      homepage=_URL,
-     citation=_CITATION,
-     task_templates=None,
  )
 
  def _split_generators(self, dl_manager):
 
  _CITATION = """\
+ @inproceedings{thakur-etal-2024-knowing,
+     title = "{``}Knowing When You Don{'}t Know{''}: A Multilingual Relevance Assessment Dataset for Robust Retrieval-Augmented Generation",
+     author = "Thakur, Nandan and
+       Bonifacio, Luiz and
+       Zhang, Crystina and
+       Ogundepo, Odunayo and
+       Kamalloo, Ehsan and
+       Alfonso-Hermelo, David and
+       Li, Xiaoguang and
+       Liu, Qun and
+       Chen, Boxing and
+       Rezagholizadeh, Mehdi and
+       Lin, Jimmy",
+     editor = "Al-Onaizan, Yaser and
+       Bansal, Mohit and
+       Chen, Yun-Nung",
+     booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2024",
+     month = nov,
+     year = "2024",
+     address = "Miami, Florida, USA",
+     publisher = "Association for Computational Linguistics",
+     url = "https://aclanthology.org/2024.findings-emnlp.730",
+     pages = "12508--12526",
+     abstract = "Retrieval-Augmented Generation (RAG) grounds Large Language Model (LLM) output by leveraging external knowledge sources to reduce factual hallucinations. However, prior work lacks a comprehensive evaluation of different language families, making it challenging to evaluate LLM robustness against errors in external retrieved knowledge. To overcome this, we establish **NoMIRACL**, a human-annotated dataset for evaluating LLM robustness in RAG across 18 typologically diverse languages. NoMIRACL includes both a non-relevant and a relevant subset. Queries in the non-relevant subset contain passages judged as non-relevant, whereas queries in the relevant subset include at least a single judged relevant passage. We measure relevance assessment using: (i) *hallucination rate*, measuring model tendency to hallucinate when the answer is not present in passages in the non-relevant subset, and (ii) *error rate*, measuring model inaccuracy to recognize relevant passages in the relevant subset. In our work, we observe that most models struggle to balance the two capacities. Models such as LLAMA-2 and Orca-2 achieve over 88{\%} hallucination rate on the non-relevant subset. Mistral and LLAMA-3 hallucinate less but can achieve up to a 74.9{\%} error rate on the relevant subset. Overall, GPT-4 is observed to provide the best tradeoff on both subsets, highlighting future work necessary to improve LLM robustness. NoMIRACL dataset and evaluation code are available at: https://github.com/project-miracl/nomiracl.",
  }
  """
 
  Data Loader for the NoMIRACL dataset.
  """
 
+ _URL = "https://nomiracl.github.io"
 
  _DL_URL_FORMAT = "data/{name}"
 
      }),
      supervised_keys=("file", "text"),
      homepage=_URL,
+     citation=_CITATION
  )
 
  def _split_generators(self, dl_manager):
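The diff cuts off at the `_split_generators` signature. For orientation, a minimal sketch of what the two generator hooks of a `datasets.GeneratorBasedBuilder` typically look like, assuming files laid out per `_DL_URL_FORMAT = "data/{name}"` and JSONL contents (the file names and format here are assumptions, not taken from this commit):

```python
import json
import datasets

def _split_generators(self, dl_manager):
    # Assumed layout: one JSONL file per split/subset under data/.
    paths = dl_manager.download({
        "test.relevant": _DL_URL_FORMAT.format(name="test.relevant.jsonl"),
        "test.non_relevant": _DL_URL_FORMAT.format(name="test.non_relevant.jsonl"),
    })
    return [
        datasets.SplitGenerator(name=split, gen_kwargs={"filepath": path})
        for split, path in paths.items()
    ]

def _generate_examples(self, filepath):
    # Yield (key, example) pairs, as GeneratorBasedBuilder requires.
    with open(filepath, encoding="utf-8") as f:
        for i, line in enumerate(f):
            yield i, json.loads(line)
```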