
Dataset Card for Concept

This repository contains the data for the benchmark 'Concept'. The board game 'Concept', similar to a single-modal version of 'Pictionary', is presented as a multilingual benchmark to evaluate LLMs' abductive reasoning skills. For each language included in our experiments (i.e., English, Spanish, French, and Dutch), the game logs are provided as a .jsonl file, with one round per line. The dataset was collected from the online platform Board Game Arena (https://en.boardgamearena.com/gamepanel?game=concept). The dataset is gated to ensure responsible use and adherence to its non-commercial terms.

Uses

Use of this benchmark is permitted for research purposes only. Download the data for a given language as follows:

from datasets import load_dataset

# Accessing the gated files requires accepting the terms on the dataset page
# and authenticating, e.g. via `huggingface-cli login`.
ds_en = load_dataset(
    "json",
    data_files="hf://datasets/IneG/concept/concept_english.jsonl",
    token=True,  # use the locally stored Hugging Face token
)
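
Once loaded, the rounds are exposed as a single "train" split (the default split name for JSON data files), with one row per round. A minimal inspection sketch, with field access based on the structure documented below:

# Inspect the first round: target concept and its difficulty rating.
round_ = ds_en["train"][0]
print(round_["item_name"], round_["metadata"]["difficulty"])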

Dataset Structure

Each line in the .jsonl file is a single item object with the following structure (a minimal parsing sketch follows the field list):

  • item_name (string) – Target concept.

  • metadata (object)

    • difficulty (string) – One of:
      easy | hard | challenging
    • card (int) – Source card or index.
    • found (bool) – Whether the item was correctly guessed.
    • game_ID (string) – Identifier of the source game.
  • steps (object: str → step) – Mapping from step indices ("0", "1", …) to detailed step data.
    The keys preserve chronological order of the original game progression.
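
For illustration, here is a minimal sketch of reading one round directly from a downloaded .jsonl file (local file name assumed; field names as documented above):

import json

# One JSON object per line; each object is a single game round.
with open("concept_english.jsonl", encoding="utf-8") as f:
    round_ = json.loads(next(f))

print(round_["item_name"])                 # target concept
print(round_["metadata"]["difficulty"])    # easy | hard | challenging
for idx, step in round_["steps"].items():  # keys "0", "1", ... stay in order
    print(idx, len(step.get("clues", [])), "clues")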


Step object

Each step describes the state of play at one point in a round and contains the following arrays:

  • clues (array of clue) – Hint markers with semantic categories.
    Fields:

    • id (int) – ID of the action in the game
    • mId (int) – ID of the action in the round
    • mColor (string) – The color of the hint
    • mType (string) – The hierarchy level of the hint (0 = high-level, 1 = low-level)
    • x (string) – x-coordinate of the marker on the board
    • y (string) – y-coordinate of the marker on the board
    • sId (string) – Identifier of the board icon the marker refers to
    • sId_label (string) – Textual representation of the clue
  • moves (array) – Board moves (e.g., color/category moves).
    Fields:

    • mColor
    • sId
    • sId_label
  • deletes (array) – Individually removed hints.
    Fields:

    • id (string)
    • removed_hint (clue)
  • clears (array) – Batch removals by color.
    Fields:

    • color (string)
    • removed_hints (array of clue)
  • guesses (array) – All guesses made in the step.
    Fields:

    • guess_id (int)
    • guess_raw (string, base64-encoded original input; see the decoding sketch after this list)
    • guess_text (string)
  • feedbacks (array) – Feedback linked to guesses.
    Fields:

    • guess_id (string or int)
    • feedback_code (int: 0 | 1 | 2) – 0 = incorrect, 1 = almost correct, 2 = correct
    • feedback_label (string)
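
As a minimal sketch (assuming `step` holds one step object taken from a round's steps mapping, and that guess_raw decodes to UTF-8 text), guesses can be paired with their feedback and decoded as follows:

import base64

FEEDBACK = {0: "incorrect", 1: "almost correct", 2: "correct"}

# `step` is assumed to hold one step object from a round's `steps` mapping.
feedback_by_guess = {
    int(fb["guess_id"]): int(fb["feedback_code"])  # guess_id may be str or int
    for fb in step.get("feedbacks", [])
}

for guess in step.get("guesses", []):
    raw = base64.b64decode(guess["guess_raw"]).decode("utf-8")
    code = feedback_by_guess.get(int(guess["guess_id"]))
    label = FEEDBACK.get(code, "no feedback")
    print(f'{guess["guess_text"]!r} (raw: {raw!r}) -> {label}')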

Citation

If you use the dataset, please cite the following paper: https://arxiv.org/abs/2510.13271

@misc{gevers2025hintbenchmarkingllmsboard,
      title={Do You Get the Hint? Benchmarking LLMs on the Board Game Concept}, 
      author={Ine Gevers and Walter Daelemans},
      year={2025},
      eprint={2510.13271},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2510.13271}, 
}