ag2435 committed on
Commit
e0fd53a · verified · 1 Parent(s): 57f2820

Add dataset card for PhantomWiki v1 (#1)


- Add dataset card for PhantomWiki v1 (d32120d88d6f0d904c2032945c9e41c8ffeb850d)
- Add question-answer config of split depth_20_size_50_seed_1 (8ecb221b3f3390b828e677f172dbd4b8a7be3a1d)
- Add text-corpus config of split depth_20_size_50_seed_1 (30e5be3f78212e4f6feb8153a1de19b20ec07fa4)
- Add database config of split depth_20_size_50_seed_1 (58bc50b267fc6998e4f980120cf8bcc7e52d184f)
- Add question-answer config of split depth_20_size_50_seed_2 (2dae61bb7a1af3b7dd5114762941a7888f561232)
- Add text-corpus config of split depth_20_size_50_seed_2 (0b974e3120f027a55d2e9d4938600c36d5e6127a)
- Add database config of split depth_20_size_50_seed_2 (f059749bc10890af0f72c6e95bb14eda8e637701)
- Add question-answer config of split depth_20_size_50_seed_3 (c3d1c9465a3e4d86ee1d198fd0fa6c21d91b8d49)
- Add text-corpus config of split depth_20_size_50_seed_3 (fce1a2237dda2133bbd4fa20af7bb335fc6eecc7)
- Add database config of split depth_20_size_50_seed_3 (ea53ab5f9dc9887c522eba5d922e0c46d3f066fa)
- Add question-answer config of split depth_20_size_500_seed_1 (78737112c303ab4be91c8ed5bfffd6e2ae7ebd0c)
- Add text-corpus config of split depth_20_size_500_seed_1 (43f7f190274458c26bb6ac2ed05244ec892a2cc9)
- Add database config of split depth_20_size_500_seed_1 (cd9764a7b485facfb7424a4f31390340410100aa)
- Add question-answer config of split depth_20_size_500_seed_2 (3eaf0d44092be828e6b19c4895ce6c96f586d3dc)
- Add text-corpus config of split depth_20_size_500_seed_2 (f8d215700b3f61184a6d814e1e1b9cd710f51886)
- Add database config of split depth_20_size_500_seed_2 (073a92205f4301376714fce7d5821efae1fa8c39)
- Add question-answer config of split depth_20_size_500_seed_3 (36e00894f5589e7854ca82cf66a693b91cb473e9)
- Add text-corpus config of split depth_20_size_500_seed_3 (a8ec5fc53dfc24afd63eeb02a6a7d5d18674ff3b)
- Add database config of split depth_20_size_500_seed_3 (c38fd98ca0322ec1bc8c8b17b2b2ce5c5dd0e227)
- Add question-answer config of split depth_20_size_5000_seed_1 (506e6175238373e762b705eb43d83cd3e3bedfe8)
- Add text-corpus config of split depth_20_size_5000_seed_1 (0421b2cca58fa7a05ec8adf17b76ea67b26e59df)
- Add database config of split depth_20_size_5000_seed_1 (20f7a1a79e618f141922e660b98b3dfc5d8118bc)
- Add question-answer config of split depth_20_size_5000_seed_2 (86470866f0fb4da9a4060cc473452dcd03cd8a6b)
- Add text-corpus config of split depth_20_size_5000_seed_2 (a7f12c6793541f0485bb3a35076b33cebbbbeb12)
- Add database config of split depth_20_size_5000_seed_2 (a588b5c1a249a9613f2e2b533b5357fb39f22d6b)
- Add question-answer config of split depth_20_size_5000_seed_3 (105f61cd7b8813565b97ce0940a3087fde44af98)
- Add text-corpus config of split depth_20_size_5000_seed_3 (08edaad2542efc0c2653d9ee8ac749e14eb8e1a2)
- Add database config of split depth_20_size_5000_seed_3 (45004c2946d8c0b71725eb4e4abb581444a74d56)

Files changed (28)
  1. README.md +342 -0
  2. database/depth_20_size_5000_seed_1-00000-of-00001.parquet +3 -0
  3. database/depth_20_size_5000_seed_2-00000-of-00001.parquet +3 -0
  4. database/depth_20_size_5000_seed_3-00000-of-00001.parquet +3 -0
  5. database/depth_20_size_500_seed_1-00000-of-00001.parquet +3 -0
  6. database/depth_20_size_500_seed_2-00000-of-00001.parquet +3 -0
  7. database/depth_20_size_500_seed_3-00000-of-00001.parquet +3 -0
  8. database/depth_20_size_50_seed_1-00000-of-00001.parquet +3 -0
  9. database/depth_20_size_50_seed_2-00000-of-00001.parquet +3 -0
  10. database/depth_20_size_50_seed_3-00000-of-00001.parquet +3 -0
  11. question-answer/depth_20_size_5000_seed_1-00000-of-00001.parquet +3 -0
  12. question-answer/depth_20_size_5000_seed_2-00000-of-00001.parquet +3 -0
  13. question-answer/depth_20_size_5000_seed_3-00000-of-00001.parquet +3 -0
  14. question-answer/depth_20_size_500_seed_1-00000-of-00001.parquet +3 -0
  15. question-answer/depth_20_size_500_seed_2-00000-of-00001.parquet +3 -0
  16. question-answer/depth_20_size_500_seed_3-00000-of-00001.parquet +3 -0
  17. question-answer/depth_20_size_50_seed_1-00000-of-00001.parquet +3 -0
  18. question-answer/depth_20_size_50_seed_2-00000-of-00001.parquet +3 -0
  19. question-answer/depth_20_size_50_seed_3-00000-of-00001.parquet +3 -0
  20. text-corpus/depth_20_size_5000_seed_1-00000-of-00001.parquet +3 -0
  21. text-corpus/depth_20_size_5000_seed_2-00000-of-00001.parquet +3 -0
  22. text-corpus/depth_20_size_5000_seed_3-00000-of-00001.parquet +3 -0
  23. text-corpus/depth_20_size_500_seed_1-00000-of-00001.parquet +3 -0
  24. text-corpus/depth_20_size_500_seed_2-00000-of-00001.parquet +3 -0
  25. text-corpus/depth_20_size_500_seed_3-00000-of-00001.parquet +3 -0
  26. text-corpus/depth_20_size_50_seed_1-00000-of-00001.parquet +3 -0
  27. text-corpus/depth_20_size_50_seed_2-00000-of-00001.parquet +3 -0
  28. text-corpus/depth_20_size_50_seed_3-00000-of-00001.parquet +3 -0
README.md ADDED
@@ -0,0 +1,342 @@
+ ---
+ license: mit
+ task_categories:
+ - question-answering
+ language:
+ - en
+ size_categories:
+ - 1M<n<10M
+ configs:
+ - config_name: database
+   data_files:
+   - split: depth_20_size_50_seed_1
+     path: database/depth_20_size_50_seed_1-*
+   - split: depth_20_size_50_seed_2
+     path: database/depth_20_size_50_seed_2-*
+   - split: depth_20_size_50_seed_3
+     path: database/depth_20_size_50_seed_3-*
+   - split: depth_20_size_500_seed_1
+     path: database/depth_20_size_500_seed_1-*
+   - split: depth_20_size_500_seed_2
+     path: database/depth_20_size_500_seed_2-*
+   - split: depth_20_size_500_seed_3
+     path: database/depth_20_size_500_seed_3-*
+   - split: depth_20_size_5000_seed_1
+     path: database/depth_20_size_5000_seed_1-*
+   - split: depth_20_size_5000_seed_2
+     path: database/depth_20_size_5000_seed_2-*
+   - split: depth_20_size_5000_seed_3
+     path: database/depth_20_size_5000_seed_3-*
+ - config_name: question-answer
+   data_files:
+   - split: depth_20_size_50_seed_1
+     path: question-answer/depth_20_size_50_seed_1-*
+   - split: depth_20_size_50_seed_2
+     path: question-answer/depth_20_size_50_seed_2-*
+   - split: depth_20_size_50_seed_3
+     path: question-answer/depth_20_size_50_seed_3-*
+   - split: depth_20_size_500_seed_1
+     path: question-answer/depth_20_size_500_seed_1-*
+   - split: depth_20_size_500_seed_2
+     path: question-answer/depth_20_size_500_seed_2-*
+   - split: depth_20_size_500_seed_3
+     path: question-answer/depth_20_size_500_seed_3-*
+   - split: depth_20_size_5000_seed_1
+     path: question-answer/depth_20_size_5000_seed_1-*
+   - split: depth_20_size_5000_seed_2
+     path: question-answer/depth_20_size_5000_seed_2-*
+   - split: depth_20_size_5000_seed_3
+     path: question-answer/depth_20_size_5000_seed_3-*
+ - config_name: text-corpus
+   data_files:
+   - split: depth_20_size_50_seed_1
+     path: text-corpus/depth_20_size_50_seed_1-*
+   - split: depth_20_size_50_seed_2
+     path: text-corpus/depth_20_size_50_seed_2-*
+   - split: depth_20_size_50_seed_3
+     path: text-corpus/depth_20_size_50_seed_3-*
+   - split: depth_20_size_500_seed_1
+     path: text-corpus/depth_20_size_500_seed_1-*
+   - split: depth_20_size_500_seed_2
+     path: text-corpus/depth_20_size_500_seed_2-*
+   - split: depth_20_size_500_seed_3
+     path: text-corpus/depth_20_size_500_seed_3-*
+   - split: depth_20_size_5000_seed_1
+     path: text-corpus/depth_20_size_5000_seed_1-*
+   - split: depth_20_size_5000_seed_2
+     path: text-corpus/depth_20_size_5000_seed_2-*
+   - split: depth_20_size_5000_seed_3
+     path: text-corpus/depth_20_size_5000_seed_3-*
+ dataset_info:
+ - config_name: database
+   features:
+   - name: content
+     dtype: string
+   splits:
+   - name: depth_20_size_50_seed_1
+     num_bytes: 25163
+     num_examples: 1
+   - name: depth_20_size_50_seed_2
+     num_bytes: 25205
+     num_examples: 1
+   - name: depth_20_size_50_seed_3
+     num_bytes: 25015
+     num_examples: 1
+   - name: depth_20_size_500_seed_1
+     num_bytes: 191003
+     num_examples: 1
+   - name: depth_20_size_500_seed_2
+     num_bytes: 190407
+     num_examples: 1
+   - name: depth_20_size_500_seed_3
+     num_bytes: 189702
+     num_examples: 1
+   - name: depth_20_size_5000_seed_1
+     num_bytes: 1847718
+     num_examples: 1
+   - name: depth_20_size_5000_seed_2
+     num_bytes: 1845391
+     num_examples: 1
+   - name: depth_20_size_5000_seed_3
+     num_bytes: 1846249
+     num_examples: 1
+   download_size: 1965619
+   dataset_size: 6185853
+ - config_name: question-answer
+   features:
+   - name: id
+     dtype: string
+   - name: question
+     dtype: string
+   - name: intermediate_answers
+     dtype: string
+   - name: answer
+     sequence: string
+   - name: prolog
+     struct:
+     - name: query
+       sequence: string
+     - name: answer
+       dtype: string
+   - name: template
+     sequence: string
+   - name: type
+     dtype: int64
+   - name: difficulty
+     dtype: int64
+   splits:
+   - name: depth_20_size_50_seed_1
+     num_bytes: 299559
+     num_examples: 500
+   - name: depth_20_size_50_seed_2
+     num_bytes: 303664
+     num_examples: 500
+   - name: depth_20_size_50_seed_3
+     num_bytes: 293959
+     num_examples: 500
+   - name: depth_20_size_500_seed_1
+     num_bytes: 308562
+     num_examples: 500
+   - name: depth_20_size_500_seed_2
+     num_bytes: 322956
+     num_examples: 500
+   - name: depth_20_size_500_seed_3
+     num_bytes: 300467
+     num_examples: 500
+   - name: depth_20_size_5000_seed_1
+     num_bytes: 338703
+     num_examples: 500
+   - name: depth_20_size_5000_seed_2
+     num_bytes: 344577
+     num_examples: 500
+   - name: depth_20_size_5000_seed_3
+     num_bytes: 320320
+     num_examples: 500
+   download_size: 619655
+   dataset_size: 2832767
+ - config_name: text-corpus
+   features:
+   - name: title
+     dtype: string
+   - name: article
+     dtype: string
+   - name: facts
+     sequence: string
+   splits:
+   - name: depth_20_size_50_seed_1
+     num_bytes: 25754
+     num_examples: 51
+   - name: depth_20_size_50_seed_2
+     num_bytes: 26117
+     num_examples: 50
+   - name: depth_20_size_50_seed_3
+     num_bytes: 25637
+     num_examples: 51
+   - name: depth_20_size_500_seed_1
+     num_bytes: 262029
+     num_examples: 503
+   - name: depth_20_size_500_seed_2
+     num_bytes: 260305
+     num_examples: 503
+   - name: depth_20_size_500_seed_3
+     num_bytes: 259662
+     num_examples: 504
+   - name: depth_20_size_5000_seed_1
+     num_bytes: 2614872
+     num_examples: 5030
+   - name: depth_20_size_5000_seed_2
+     num_bytes: 2608826
+     num_examples: 5029
+   - name: depth_20_size_5000_seed_3
+     num_bytes: 2609449
+     num_examples: 5039
+   download_size: 2851789
+   dataset_size: 8692651
+ ---
+
+ # Dataset Card for PhantomWiki
+
+ **This repository is a collection of PhantomWiki instances generated using the `phantom-wiki` Python package.**
+
+ PhantomWiki is a framework for evaluating LLMs, specifically RAG and agentic workflows, that is resistant to memorization.
+ Unlike prior work, it is neither a fixed dataset, nor is it based on any existing data.
+ Instead, PhantomWiki generates unique dataset instances, comprising factually consistent document corpora with diverse question-answer pairs, on demand.
+
+ ## Dataset Details
+
+ ### Dataset Description
+
+ PhantomWiki generates a fictional universe of characters along with a set of facts.
+ We reflect these facts in a large-scale corpus, mimicking the style of fan-wiki websites.
+ Then we generate question-answer pairs with tunable difficulties, encapsulating the types of multi-hop questions commonly considered in the question-answering (QA) literature.
+
+ - **Created by:** Albert Gong, Kamilė Stankevičiūtė, Chao Wan, Anmol Kabra, Raphael Thesmar, Johann Lee, Julius Klenke, Carla P. Gomes, Kilian Q. Weinberger
+ - **Funded by:** AG is funded by the NewYork-Presbyterian Hospital; KS is funded by AstraZeneca; CW is funded by NSF OAC-2118310; AK is partially funded by the National Science Foundation (NSF), the National Institute of Food and Agriculture (USDA/NIFA), the Air Force Office of Scientific Research (AFOSR), and a Schmidt AI2050 Senior Fellowship, a Schmidt Sciences program.
+ - **Shared by \[optional\]:** \[More Information Needed\]
+ - **Language(s) (NLP):** English
+ - **License:** MIT License
+
+ ### Dataset Sources \[optional\]
+
+ <!-- Provide the basic links for the dataset. -->
+
+ - **Repository:** https://github.com/kilian-group/phantom-wiki
+ - **Paper:** [PhantomWiki: On-Demand Datasets for Reasoning and Retrieval Evaluation](https://huggingface.co/papers/2502.20377)
+ - **Demo:** https://github.com/kilian-group/phantom-wiki/blob/main/demo.ipynb
+
+ ## Uses
+
+ **We encourage users to generate a new (unique) PhantomWiki instance to combat data leakage and overfitting.**
+ PhantomWiki enables quantitative evaluation of the reasoning and retrieval capabilities of LLMs. See our full paper for an analysis of frontier LLMs, including GPT-4o, Gemini-1.5-Flash, [Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) and [DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B).
+
+ ### Direct Use
+
+ PhantomWiki is intended to evaluate retrieval-augmented generation (RAG) systems and agentic workflows.
+
+ ### Out-of-Scope Use
+
+ <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
+
+ ## Dataset Structure
+
+ PhantomWiki exposes three components, reflected in the three **configurations**:
+
+ 1. `question-answer`: Question-answer pairs generated using a context-free grammar
+ 2. `text-corpus`: Documents generated using natural-language templates
+ 3. `database`: Prolog database containing the facts and clauses representing the universe
+
+ Each universe is saved as a **split**.
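Each split name encodes the generation parameters of its universe: `depth_<d>_size_<s>_seed_<r>`. This repository ships nine universes per configuration (depth 20, sizes 50/500/5000, seeds 1-3). A minimal sketch of enumerating and parsing these split names (the helper `parse_split` is illustrative, not part of the `phantom-wiki` package):

```python
import itertools
import re

# The nine splits shipped in this repository: depth 20, three universe
# sizes, three random seeds per size.
SPLITS = [
    f"depth_{d}_size_{s}_seed_{r}"
    for d, s, r in itertools.product([20], [50, 500, 5000], [1, 2, 3])
]

def parse_split(name: str) -> dict:
    """Recover the generation parameters from a split name."""
    m = re.fullmatch(r"depth_(\d+)_size_(\d+)_seed_(\d+)", name)
    if m is None:
        raise ValueError(f"unrecognized split name: {name}")
    depth, size, seed = map(int, m.groups())
    return {"depth": depth, "size": size, "seed": seed}

print(len(SPLITS))  # -> 9
print(parse_split("depth_20_size_500_seed_2"))
```

With the `datasets` library, any single universe can then be loaded by passing one of these names as the `split` argument of `load_dataset`, together with the configuration name (`question-answer`, `text-corpus`, or `database`) and this repository's id.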
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ Most mathematical and logical reasoning datasets do not explicitly evaluate retrieval capabilities, and
+ few retrieval datasets incorporate complex reasoning, save for a few exceptions (e.g., [BRIGHT](https://huggingface.co/datasets/xlangai/BRIGHT), [MultiHop-RAG](https://huggingface.co/datasets/yixuantt/MultiHopRAG)).
+ However, virtually all retrieval datasets are derived from Wikipedia or internet articles, which are contained in LLM training data.
+ We take the first steps toward a large-scale synthetic dataset that can evaluate LLMs' reasoning and retrieval capabilities.
+
+ ### Source Data
+
+ This is a synthetic dataset. The extent to which we use real data is detailed as follows:
+
+ 1. We sample surnames from among the most common surnames in the US population according to https://names.mongabay.com/most_common_surnames.htm
+ 2. We sample first names using the `names` Python package (https://github.com/treyhunner/names). We thank the contributors for making this tool publicly available.
+ 3. We sample jobs from the list of real-life jobs in the `faker` Python package. We thank the contributors for making this tool publicly available.
+ 4. We sample hobbies from the list of real-life hobbies at https://www.kaggle.com/datasets/mrhell/list-of-hobbies. We are grateful to the original author for curating this list and making it publicly available.
+
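The list above describes the pools that character attributes are drawn from. A minimal illustrative sketch of seeded attribute sampling, using tiny inline stand-in pools (the actual generator draws from the `names` and `faker` packages and the hobby list cited above):

```python
import random

# Illustrative stand-in pools; the real pipeline uses the sources
# enumerated in the list above.
FIRST_NAMES = ["Alice", "Bob", "Carol", "David"]
SURNAMES = ["Smith", "Johnson", "Williams", "Brown"]
JOBS = ["teacher", "engineer", "nurse", "farmer"]
HOBBIES = ["chess", "birdwatching", "meditation", "running"]

def sample_character(rng: random.Random) -> dict:
    """Draw one fictional character's attributes at random."""
    return {
        "name": f"{rng.choice(FIRST_NAMES)} {rng.choice(SURNAMES)}",
        "job": rng.choice(JOBS),
        "hobby": rng.choice(HOBBIES),
    }

# Fixing the seed makes a universe reproducible, matching the
# seed_1 / seed_2 / seed_3 naming of the splits.
rng = random.Random(1)
characters = [sample_character(rng) for _ in range(50)]
print(len(characters))  # -> 50
```

The split sizes (50, 500, 5000) correspond to how many such characters populate a universe; all of their attributes are fictional combinations of real-world name, job, and hobby pools.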
+ #### Data Collection and Processing
+
+ This dataset was generated on commodity CPUs using Python and SWI-Prolog. See the paper for full details of the generation pipeline, including timings.
+
+ #### Who are the source data producers?
+
+ <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
+
+ N/A
+
+ ### Annotations \[optional\]
+
+ <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
+
+ N/A
+
+ #### Annotation process
+
+ <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
+
+ N/A
+
+ #### Who are the annotators?
+
+ <!-- This section describes the people or systems who created the annotations. -->
+
+ N/A
+
+ #### Personal and Sensitive Information
+
+ <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
+
+ PhantomWiki does not reference any personal or private data.
+
+ ## Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ PhantomWiki generates large-scale corpora, reflecting fictional universes of characters and mimicking the style of fan-wiki websites. While sufficient for evaluating the complex reasoning and retrieval capabilities of LLMs, PhantomWiki is limited to simplified family relations and attributes. Extending the complexity of PhantomWiki to the full scope of Wikipedia is an exciting future direction.
+
+ ### Recommendations
+
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+ PhantomWiki should be used as a benchmark to inform how LLMs should be used on reasoning- and retrieval-based tasks. For holistic evaluation on diverse tasks, PhantomWiki should be combined with other benchmarks.
+
+ ## Citation \[optional\]
+
+ <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
+
+ **BibTeX:**
+
+ \[More Information Needed\]
+
+ **APA:**
+
+ \[More Information Needed\]
+
+ ## Glossary \[optional\]
+
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
+
+ \[More Information Needed\]
+
+ ## More Information \[optional\]
+
+ \[More Information Needed\]
+
+ ## Dataset Card Authors \[optional\]
+
+ \[More Information Needed\]
+
+ ## Dataset Card Contact
+
database/depth_20_size_5000_seed_1-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:560a9210c398c7a39f90f4f466c2269232d76f52d5a6c058c163f01ac10686cd
+ size 593186
database/depth_20_size_5000_seed_2-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f6382ab71da7982d85129ca926e4f971ccc09c508ef9ef3f333c1deca8655c32
+ size 589342
database/depth_20_size_5000_seed_3-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9279ebb65ef7bc78a41f7176dfab0fe1ff3924160098e682f9530c07f1c545c7
+ size 590174
database/depth_20_size_500_seed_1-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3334aeac738d7bddc055e750fc17bc1b049673f29decde933288b2c07ba3a734
+ size 56743
database/depth_20_size_500_seed_2-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:301aaf6cf1119f28e8b6622d91a6fcc44f24fd494778f65c06899ecad7189b52
+ size 56386
database/depth_20_size_500_seed_3-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:68a0446aff319c5b0fe53e6d1ae509e5294a06bcb84389d01e44396f19cc3240
+ size 55980
database/depth_20_size_50_seed_1-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7ce2c57e4de20b8c40ef274e806cacd7c0f49657432ab7b6dd8ad43a0b9f125a
+ size 7984
database/depth_20_size_50_seed_2-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e7f0bf596d06ac7d04eb3376ee767ebd66996d81e62efcd60098eff317e84053
+ size 7848
database/depth_20_size_50_seed_3-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:17d316296dc13d4ddbd0021093466f196e0a203f76f0cc0be729fd43d81afda4
+ size 7976
question-answer/depth_20_size_5000_seed_1-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2fe18349e26f5c06ce50c77390342969c290b788f77a1ae93959ae42b35b6f69
+ size 86073
question-answer/depth_20_size_5000_seed_2-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0bac202d2a2fed310d2cdae4d75a344011d108dc3f21231310692e864d28a795
+ size 88247
question-answer/depth_20_size_5000_seed_3-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:43bb9b2a09a5230abdacbd175ebb7328b0703e9c6d99707ed1494a82a6624581
+ size 77966
question-answer/depth_20_size_500_seed_1-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e0db306a2cf29ab1487371c382b3e1a488049af7a7538758d7c00d03b7d8c403
+ size 65361
question-answer/depth_20_size_500_seed_2-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9f49d4b853412918b2a3ad267fcb5567edb51f25dac7964be4aebdf3b6c48227
+ size 68919
question-answer/depth_20_size_500_seed_3-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b4dc31862659722658116d682e4a791217981f1f5c29cf45ce35c81207bc1821
+ size 63322
question-answer/depth_20_size_50_seed_1-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:94f8032794beaa2f26cf377cb5148b63f38f67ed91f38765b9731ed0ce3fe4a6
+ size 56658
question-answer/depth_20_size_50_seed_2-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:337afe86d0b270dd8b6ae6dacd34829bd11637730d50bae2549a752c93a51975
+ size 57018
question-answer/depth_20_size_50_seed_3-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cf14b8712c41f2bae21743651941e5a343c340e595024ca79db0455e260428d3
+ size 56091
text-corpus/depth_20_size_5000_seed_1-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:843aa3bc1032db129f6a6ee31f07580b800bd090cde59a096399d582cff255b3
+ size 857990
text-corpus/depth_20_size_5000_seed_2-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3d896ca3bbb740f21aa6c9d3ccc82062a7664a0141e96bf749290b1789b26f33
+ size 858443
text-corpus/depth_20_size_5000_seed_3-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:37b26d5c06fed0a3dc4b49f38804603ba199a461b6c483516a472189c77861c8
+ size 859423
text-corpus/depth_20_size_500_seed_1-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ded1d7ee7a6c01524f773a7b9ec496ecfecbb6905195eaa4e966c8019858de44
+ size 81086
text-corpus/depth_20_size_500_seed_2-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e6a5b54489e03b7a05410c9240a3ca372230d016c7f7a5a2ac80ed9d11512a0e
+ size 80635
text-corpus/depth_20_size_500_seed_3-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:15866728c168de93536e171ca2ad40160ac0b6059dce1300f84b430304ab4abf
+ size 81581
text-corpus/depth_20_size_50_seed_1-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cae2ff21054ee7e92fe8ea34e9cd0aeb79d2c4320b0e99b4721782f4316834c4
+ size 10988
text-corpus/depth_20_size_50_seed_2-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1b8adfb5e57412f7e3989103349ecdf7c73b421a93e369215c8f4467c2468352
+ size 10708
text-corpus/depth_20_size_50_seed_3-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:da3589a73d4f42640d2758e33d48c036a6664d16dcbc805636e3bf51fef66b57
+ size 10935