---
license: mit
task_categories:
  - question-answering
language:
  - en
size_categories:
  - 1M<n<10M
configs:
  - config_name: database
    data_files:
      - split: depth_20_size_50_seed_1
        path: database/depth_20_size_50_seed_1-*
      - split: depth_20_size_50_seed_2
        path: database/depth_20_size_50_seed_2-*
      - split: depth_20_size_50_seed_3
        path: database/depth_20_size_50_seed_3-*
      - split: depth_20_size_500_seed_1
        path: database/depth_20_size_500_seed_1-*
      - split: depth_20_size_500_seed_2
        path: database/depth_20_size_500_seed_2-*
      - split: depth_20_size_500_seed_3
        path: database/depth_20_size_500_seed_3-*
  - config_name: question-answer
    data_files:
      - split: depth_20_size_50_seed_1
        path: question-answer/depth_20_size_50_seed_1-*
      - split: depth_20_size_50_seed_2
        path: question-answer/depth_20_size_50_seed_2-*
      - split: depth_20_size_50_seed_3
        path: question-answer/depth_20_size_50_seed_3-*
      - split: depth_20_size_500_seed_1
        path: question-answer/depth_20_size_500_seed_1-*
      - split: depth_20_size_500_seed_2
        path: question-answer/depth_20_size_500_seed_2-*
      - split: depth_20_size_500_seed_3
        path: question-answer/depth_20_size_500_seed_3-*
      - split: depth_20_size_5000_seed_1
        path: question-answer/depth_20_size_5000_seed_1-*
  - config_name: text-corpus
    data_files:
      - split: depth_20_size_50_seed_1
        path: text-corpus/depth_20_size_50_seed_1-*
      - split: depth_20_size_50_seed_2
        path: text-corpus/depth_20_size_50_seed_2-*
      - split: depth_20_size_50_seed_3
        path: text-corpus/depth_20_size_50_seed_3-*
      - split: depth_20_size_500_seed_1
        path: text-corpus/depth_20_size_500_seed_1-*
      - split: depth_20_size_500_seed_2
        path: text-corpus/depth_20_size_500_seed_2-*
      - split: depth_20_size_500_seed_3
        path: text-corpus/depth_20_size_500_seed_3-*
dataset_info:
  - config_name: database
    features:
      - name: content
        dtype: string
    splits:
      - name: depth_20_size_50_seed_1
        num_bytes: 25163
        num_examples: 1
      - name: depth_20_size_50_seed_2
        num_bytes: 25205
        num_examples: 1
      - name: depth_20_size_50_seed_3
        num_bytes: 25015
        num_examples: 1
      - name: depth_20_size_500_seed_1
        num_bytes: 191003
        num_examples: 1
      - name: depth_20_size_500_seed_2
        num_bytes: 190407
        num_examples: 1
      - name: depth_20_size_500_seed_3
        num_bytes: 189702
        num_examples: 1
    download_size: 192917
    dataset_size: 646495
  - config_name: question-answer
    features:
      - name: id
        dtype: string
      - name: question
        dtype: string
      - name: intermediate_answers
        dtype: string
      - name: answer
        sequence: string
      - name: prolog
        struct:
          - name: query
            sequence: string
          - name: answer
            dtype: string
      - name: template
        sequence: string
      - name: type
        dtype: int64
      - name: difficulty
        dtype: int64
    splits:
      - name: depth_20_size_50_seed_1
        num_bytes: 299559
        num_examples: 500
      - name: depth_20_size_50_seed_2
        num_bytes: 303664
        num_examples: 500
      - name: depth_20_size_50_seed_3
        num_bytes: 293959
        num_examples: 500
      - name: depth_20_size_500_seed_1
        num_bytes: 308562
        num_examples: 500
      - name: depth_20_size_500_seed_2
        num_bytes: 322956
        num_examples: 500
      - name: depth_20_size_500_seed_3
        num_bytes: 300467
        num_examples: 500
      - name: depth_20_size_5000_seed_1
        num_bytes: 338703
        num_examples: 500
    download_size: 453442
    dataset_size: 2167870
  - config_name: text-corpus
    features:
      - name: title
        dtype: string
      - name: article
        dtype: string
      - name: facts
        sequence: string
    splits:
      - name: depth_20_size_50_seed_1
        num_bytes: 25754
        num_examples: 51
      - name: depth_20_size_50_seed_2
        num_bytes: 26117
        num_examples: 50
      - name: depth_20_size_50_seed_3
        num_bytes: 25637
        num_examples: 51
      - name: depth_20_size_500_seed_1
        num_bytes: 262029
        num_examples: 503
      - name: depth_20_size_500_seed_2
        num_bytes: 260305
        num_examples: 503
      - name: depth_20_size_500_seed_3
        num_bytes: 259662
        num_examples: 504
    download_size: 275933
    dataset_size: 859504
---

# Dataset Card for PhantomWiki

This repository is a collection of PhantomWiki instances generated using the `phantom-wiki` Python package.

PhantomWiki is a framework for evaluating LLMs, specifically RAG and agentic workflows, that is resistant to memorization. Unlike prior work, it is neither a fixed dataset nor based on any existing data. Instead, PhantomWiki generates unique dataset instances on demand, each comprising a factually consistent document corpus with diverse question-answer pairs.

## Dataset Details

### Dataset Description

PhantomWiki generates a fictional universe of characters along with a set of facts. We reflect these facts in a large-scale corpus that mimics the style of fan-wiki websites. We then generate question-answer pairs with tunable difficulty, encapsulating the types of multi-hop questions commonly considered in the question-answering (QA) literature.

- **Created by:** Albert Gong, Kamilė Stankevičiūtė, Chao Wan, Anmol Kabra, Raphael Thesmar, Johann Lee, Julius Klenke, Carla P. Gomes, Kilian Q. Weinberger
- **Funded by:** AG is funded by the NewYork-Presbyterian Hospital; KS is funded by AstraZeneca; CW is funded by NSF OAC-2118310; AK is partially funded by the National Science Foundation (NSF), the National Institute of Food and Agriculture (USDA/NIFA), the Air Force Office of Scientific Research (AFOSR), and a Schmidt AI2050 Senior Fellowship, a Schmidt Sciences program.
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** English
- **License:** MIT License

### Dataset Sources [optional]

## Uses

We encourage users to generate a new (unique) PhantomWiki instance to combat data leakage and overfitting. PhantomWiki enables quantitative evaluation of the reasoning and retrieval capabilities of LLMs. See our full paper for an analysis of frontier LLMs, including GPT-4o, Gemini-1.5-Flash, Llama-3.3-70B-Instruct, and DeepSeek-R1-Distill-Qwen-32B.
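As a minimal sketch of such an evaluation (assuming the Hugging Face `datasets` library, an assumed repository id, and a hypothetical `answer_question` function standing in for the RAG system or agent under test), exact-match accuracy over one split could be computed as follows:

```python
from datasets import load_dataset

# Assumed repository id; substitute the actual path of this dataset.
REPO_ID = "kilian-group/phantom-wiki-v1"

# Load one universe's question-answer pairs (each universe is a split).
qa = load_dataset(REPO_ID, "question-answer", split="depth_20_size_50_seed_1")

def answer_question(question: str) -> list[str]:
    """Hypothetical stand-in for the system under evaluation."""
    raise NotImplementedError

correct = 0
for example in qa:
    prediction = answer_question(example["question"])
    # "answer" is a list of gold strings; exact match requires the full set.
    if sorted(prediction) == sorted(example["answer"]):
        correct += 1

print(f"Exact-match accuracy: {correct / len(qa):.3f}")
```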

### Direct Use

PhantomWiki is intended to evaluate retrieval-augmented generation (RAG) systems and agentic workflows.

### Out-of-Scope Use

## Dataset Structure

PhantomWiki exposes three components, reflected in the three configurations:

1. `question-answer`: Question-answer pairs generated using a context-free grammar
2. `text-corpus`: Documents generated using natural-language templates
3. `database`: Prolog database containing the facts and clauses representing the universe

Each universe, identified by its generation parameters (depth, size, and random seed, e.g. `depth_20_size_50_seed_1`), is saved as a separate split, so the same split name selects the corresponding question-answer pairs, articles, and database across the three configurations.
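For example, here is a minimal loading sketch, assuming the Hugging Face `datasets` library (the `kilian-group/phantom-wiki-v1` repository id below is an assumption; substitute the actual path of this dataset):

```python
from datasets import load_dataset

REPO_ID = "kilian-group/phantom-wiki-v1"  # assumed repository id
SPLIT = "depth_20_size_50_seed_1"         # one universe = one split

# Question-answer pairs for this universe.
qa = load_dataset(REPO_ID, "question-answer", split=SPLIT)
# The fan-wiki-style articles, with their titles and supporting facts.
corpus = load_dataset(REPO_ID, "text-corpus", split=SPLIT)
# The Prolog database, stored as a single example with a "content" field.
database = load_dataset(REPO_ID, "database", split=SPLIT)[0]["content"]

print(qa[0]["question"], qa[0]["answer"])
print(corpus[0]["title"])
```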

## Dataset Creation

### Curation Rationale

Most mathematical and logical reasoning datasets do not explicitly evaluate retrieval capabilities, and few retrieval datasets incorporate complex reasoning (notable exceptions include BRIGHT and MultiHop-RAG). Moreover, virtually all retrieval datasets are derived from Wikipedia or internet articles, which are contained in LLM training data. We take the first steps toward a large-scale synthetic dataset that can evaluate LLMs' reasoning and retrieval capabilities.

### Source Data

This is a synthetic dataset. The extent to which we use real data is as follows:

1. We sample surnames from among the most common surnames in the US population, according to https://names.mongabay.com/most_common_surnames.htm
2. We sample first names using the `names` Python package (https://github.com/treyhunner/names). We thank the contributors for making this tool publicly available.
3. We sample jobs from the list of real-life jobs in the `faker` Python package. We thank the contributors for making this tool publicly available.
4. We sample hobbies from the list of real-life hobbies at https://www.kaggle.com/datasets/mrhell/list-of-hobbies. We are grateful to the original author for curating this list and making it publicly available.

#### Data Collection and Processing

This dataset was generated on commodity CPUs using Python and SWI-Prolog. See the paper for full details of the generation pipeline, including timings.
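As an illustration of how the generated facts can be queried programmatically, here is a sketch that consults the `database` config with SWI-Prolog through the `pyswip` bindings. The repository id and the `hobby/2` predicate name are assumptions for illustration; each generated database defines its own predicates, which can be inspected in the `content` string itself.

```python
import tempfile

from datasets import load_dataset
from pyswip import Prolog  # Python bindings to SWI-Prolog

# Assumed repository id; the database config stores each universe's
# Prolog program as a single example with a "content" field.
db = load_dataset("kilian-group/phantom-wiki-v1", "database",
                  split="depth_20_size_50_seed_1")

# Write the program to a file so SWI-Prolog can consult it.
with tempfile.NamedTemporaryFile("w", suffix=".pl", delete=False) as f:
    f.write(db[0]["content"])
    path = f.name

prolog = Prolog()
prolog.consult(path)

# "hobby/2" is a hypothetical predicate name used for illustration;
# inspect the database content for the predicates it actually defines.
for solution in prolog.query("hobby(Person, Hobby)", maxresult=5):
    print(solution["Person"], solution["Hobby"])
```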

#### Who are the source data producers?

N/A

### Annotations [optional]

N/A

#### Annotation process

N/A

#### Who are the annotators?

N/A

#### Personal and Sensitive Information

PhantomWiki does not reference any personal or private data.

## Bias, Risks, and Limitations

PhantomWiki generates large-scale corpora, reflecting fictional universes of characters and mimicking the style of fan-wiki websites. While sufficient for evaluating complex reasoning and retrieval capabilities of LLMs, PhantomWiki is limited to simplified family relations and attributes. Extending the complexity of PhantomWiki to the full scope of Wikipedia is an exciting future direction.

### Recommendations

PhantomWiki should be used as a benchmark to inform how LLMs should be used on reasoning- and retrieval-based tasks. For holistic evaluation on diverse tasks, PhantomWiki should be combined with other benchmarks.

## Citation [optional]

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Dataset Card Authors [optional]

[More Information Needed]

## Dataset Card Contact

[email protected]