---
license: cdla-permissive-2.0
task_categories:
- question-answering
language:
- en
tags:
- deepresearch
size_categories:
- n<1K
configs:
- config_name: preview
  data_files:
  - split: test
    path: preview/test-*
- config_name: v1-full
  data_files:
  - split: test
    path: v1-full/test-*
dataset_info:
- config_name: preview
  features:
  - name: category
    dtype: string
  - name: key
    dtype: int64
  - name: question
    dtype: string
  - name: ground_truths
    dtype: string
  - name: misc
    dtype: string
  splits:
  - name: test
    num_bytes: 17366
    num_examples: 10
  download_size: 19783
  dataset_size: 17366
- config_name: v1-full
  features:
  - name: category
    dtype: string
  - name: key
    dtype: int64
  - name: question
    dtype: string
  - name: ground_truths
    dtype: string
  - name: misc
    dtype: string
  - name: canary
    dtype: string
  splits:
  - name: test
    num_bytes: 206083
    num_examples: 100
  download_size: 129268
  dataset_size: 206083
---
# Dataset Card for LiveDRBench: Deep Research as Claim Discovery

[arXiv Paper](https://arxiv.org/abs/2508.04183) | [Hugging Face Dataset](https://huggingface.co/datasets/microsoft/LiveDRBench) | [Evaluation Code](https://github.com/microsoft/LiveDRBench)

We propose a formal characterization of the deep research (DR) problem and introduce a new benchmark, _LiveDRBench_, to evaluate the performance of DR systems. To enable objective evaluation, we define DR using an intermediate output representation that encodes key claims uncovered during search, separating the reasoning challenge from surface-level report generation.

## Dataset Details

The benchmark consists of 100 challenging DR tasks over scientific topics (e.g., dataset discovery, materials discovery, novelty search, prior art discovery) and public-interest events (e.g., the Oscars). The data was collected between May and June 2025. We plan to keep the benchmark live and to release periodic updates with new tasks.

Each task consists of (a) a prompt with a short description of the task and the expected output format, and (b) ground-truth JSON containing the claims and references that should be uncovered. We also include an evaluation script for scoring DR systems with information-retrieval metrics, namely precision, recall, and F1.

The benchmark contains eight categories: SciFacts-Geo, SciFacts-Materials, NovelDatasets identification, NovelDatasets identification and extraction, NovelDatasets peer retrieval, PriorArt search, Entities, and Flight incidents.
The evaluation code for the benchmark is available on [GitHub](https://github.com/microsoft/livedrbench).

A detailed discussion of LiveDRBench, including how it was developed and tested, can be found in our [arXiv paper](https://arxiv.org/abs/2508.04183).

## Usage

To use LiveDRBench's questions, you can load the benchmark using the Hugging Face `datasets` library:

```python
from datasets import load_dataset

livedrbench = load_dataset("microsoft/LiveDRBench", "v1-full")['test']
```
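
For a quick first look, the 10-question `preview` config can be loaded the same way. The sketch below is illustrative only; it assumes nothing beyond the fields declared in this card's metadata (`category`, `key`, `question`) and prints one question per benchmark category:

```python
from collections import defaultdict

from datasets import load_dataset

# Load the small preview config (10 examples, test split) for inspection.
preview = load_dataset("microsoft/LiveDRBench", "preview")["test"]

# Group examples by benchmark category.
by_category = defaultdict(list)
for example in preview:
    by_category[example["category"]].append(example)

# Print one truncated question per category.
for category, examples in by_category.items():
    first = examples[0]
    print(f"[{category}] key={first['key']}")
    print(first["question"][:200] + " ...")
    print()
```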

To evaluate predictions on LiveDRBench, provide a predictions file with the following JSON schema:

```
[
  {
    "key": str,                             // Unique identifier from livedrbench.csv
    "preds": List[List[dict | str] | dict]  // Predictions in the format specified by each question in livedrbench.csv
  },
  ...
]
```
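
As an illustrative sketch (not the official tooling), such a file can be assembled directly from the benchmark; `run_deep_research` below is a hypothetical placeholder for your own DR system:

```python
import json

from datasets import load_dataset


def run_deep_research(question: str):
    """Hypothetical placeholder: replace with your own DR system.

    It should return predictions in the format that each question requests
    (per the schema above).
    """
    return []


livedrbench = load_dataset("microsoft/LiveDRBench", "v1-full")["test"]

predictions = []
for example in livedrbench:
    predictions.append(
        {
            # The schema above lists "key" as a string, so cast it here.
            "key": str(example["key"]),
            "preds": run_deep_research(example["question"]),
        }
    )

with open("predictions.json", "w") as f:
    json.dump(predictions, f, indent=2)
```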

Then, run the evaluation script in the GitHub repository. This script will compute **precision**, **recall**, and **F1** scores for each benchmark category.

```bash
python src/evaluation.py \
    --openai_api_key YOUR_API_KEY \
    --preds_file path/to/your/predictions.json \
    [--openai_model_name gpt-4o] \
    [--num_threads 8] \
    [--debug]
```

- `--openai_api_key` (required): Your OpenAI API key.
- `--preds_file` (required): Path to the predictions JSON file.
- `--openai_model_name` (optional): Model to use as the judge (default: `gpt-4o`).
- `--num_threads` (optional): Number of parallel threads (default: 8).
- `--debug` (optional): Enable debug mode, which runs without multithreading.

## Intended Uses

The LiveDRBench benchmark is intended to be used together with the GitHub repository. The code and the benchmark are being shared with the research community to facilitate reproduction of our results and foster further research in this area. LiveDRBench is intended to be used by domain experts who are independently capable of evaluating the quality of outputs before acting on them.

## Out-of-scope Uses

- LiveDRBench is not well suited for training new Deep Research models; it only provides a test set. To avoid accidental test set leakage, we encrypt the answers in the benchmark, following the procedure of the [BrowseComp benchmark's release](https://github.com/openai/simple-evals/blob/main/browsecomp_eval.py) (see the sketch after this list).

- The LiveDRBench dataset is not representative of all kinds of Deep Research queries, especially those that require assessing the writing quality of long reports.

- We do not recommend using the LiveDRBench repository or dataset in commercial or real-world applications without further testing and development. They are being released for research purposes only.

- LiveDRBench should not be used in highly regulated domains where inaccurate outputs could suggest actions that lead to injury or negatively impact an individual's legal, financial, or life opportunities.

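For reference, the BrowseComp release decrypts answers by XOR-ing the base64-decoded ciphertext with a keystream derived from a canary string via SHA-256. The sketch below mirrors that procedure; it assumes, without having verified it against the evaluation code, that LiveDRBench's `ground_truths` and `canary` fields follow the same scheme. Consult the GitHub repository for the authoritative implementation.

```python
import base64
import hashlib


def derive_key(password: str, length: int) -> bytes:
    """Stretch a SHA-256 digest of the password to the requested length."""
    key = hashlib.sha256(password.encode()).digest()
    return (key * (length // len(key) + 1))[:length]


def decrypt(ciphertext_b64: str, password: str) -> str:
    """XOR the base64-decoded ciphertext with the derived keystream."""
    encrypted = base64.b64decode(ciphertext_b64)
    key = derive_key(password, len(encrypted))
    return bytes(a ^ b for a, b in zip(encrypted, key)).decode()


# Hypothetical usage, assuming the BrowseComp-style scheme described above:
# answer = decrypt(example["ground_truths"], example["canary"])
```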

## Data Creation: Problem Inversion

Creating LiveDRBench involves a _problem inversion_ process that makes it easy to add new instances from a set of existing reasoning problems. The first step is to find a long-context or document reasoning problem that includes a question based on the document and its ground-truth answer. In the second step, this problem is inverted to create a new question asking for an event or entity consistent with the properties mentioned in an answer. In the third step, the question is refined (e.g., more properties are added) until it admits a unique answer. Finally, the ground-truth set of reference documents is updated in case there are additional documents that provide the same answer.

For example, existing data from the [Curie](https://github.com/google/curie) benchmark consists of scientific papers and questions that can be answered based on each paper. The data was transformed to create questions that need to be answered without access to the paper, thus requiring non-trivial search and reasoning. The final ground-truth answers for each question were verified by MSR researchers.

While we aim to cover a broad set of scientific fields and world events, the dataset primarily covers materials science, geospatial analysis, and computer science, along with world events including flight incidents, the Oscars, and Olympiads. We acknowledge that many scientific fields and geographic areas may not be well covered.

**Note**: LiveDRBench does not contain links to external data sources. LiveDRBench includes data from an existing scientific dataset, [Curie](https://github.com/google/curie). All queries are answerable using publicly available information.

## Best Practices

Best performance can be achieved by connecting an API key directly to the codebase. LiveDRBench should not be the only measure of a DR model's performance; additional methods specific to the model's use case should also be used to determine its overall performance.

We strongly encourage users to use LLMs that support robust Responsible AI mitigations, such as the Azure OpenAI (AOAI) service. Such services continually update their safety and RAI mitigations with the latest industry standards for responsible use. For more on AOAI's best practices when employing foundation models for scripts and applications:

- [Blog post on responsible AI features in AOAI that were presented at Ignite 2023](https://techcommunity.microsoft.com/t5/ai-azure-ai-services-blog/announcing-new-ai-safety-amp-responsible-ai-features-in-azure/ba-p/3983686)

- [Overview of Responsible AI practices for Azure OpenAI models](https://learn.microsoft.com/en-us/legal/cognitive-services/openai/overview)

- [Azure OpenAI Transparency Note](https://learn.microsoft.com/en-us/legal/cognitive-services/openai/transparency-note)

- [OpenAI's Usage policies](https://openai.com/policies/usage-policies)

- [Azure OpenAI's Code of Conduct](https://learn.microsoft.com/en-us/legal/cognitive-services/openai/code-of-conduct)

Users are reminded to be mindful of data privacy concerns and are encouraged to review the privacy policies associated with any models and data storage solutions interfacing with LiveDRBench.

It is the user's responsibility to ensure that the use of the LiveDRBench repo and dataset complies with relevant data protection regulations and organizational guidelines.

## License

Code in the GitHub repository is licensed under the [MIT License](https://github.com/microsoft/livedrbench/blob/main/LICENSE).
The LiveDRBench dataset is released under the CDLA-Permissive-2.0 license.

## Contact

If you have suggestions or questions, please raise an issue on GitHub or contact us at [email protected].

## Citing LiveDRBench

    @inproceedings{livedrbench2025,
      title={Characterizing Deep Research: A Benchmark and Formal Definition},
      author={Java, Abhinav and Khandelwal, Ashmit and Midigeshi, Sukruta and Halfaker, Aaron and Deshpande, Amit and Goyal, Navin and Gupta, Ankur and Natarajan, Nagarajan and Sharma, Amit},
      booktitle={arXiv preprint arXiv:2508.04183},
      year={2025}
    }