---
license: mit
task_categories:
  - text2text-generation
language:
  - en
pretty_name: Perspective-Taking
size_categories:
  - n<1K
---

# Perspective-Taking Dataset

## Dataset Description

This dataset contains image-question pairs for perspective-taking tasks.

## Dataset Statistics

- Training samples: 218
- Testing samples: 25
- Total samples: 243
- Concepts: `concept_10_image`, `concept_10_multiimage`

## Dataset Structure

The dataset is organized into train and test splits for each concept (a hypothetical on-disk layout is sketched after this list). Each sample consists of:

- An image file (or multiple images)
- A question about the image
- An answer to the question (when available)
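
For orientation, here is a hypothetical on-disk layout consistent with the fields described in this card. The per-sample directory names and image filenames are assumptions; the concept names, splits, and `question.txt`/`answer.txt` files come from the card itself:

```text
concept_10_image/
├── train/
│   └── <id>/               # one directory per sample (assumed)
│       ├── image.png       # primary image; actual filename may differ
│       ├── question.txt    # question text
│       └── answer.txt      # answer text (may be absent)
└── test/
    └── ...
concept_10_multiimage/
└── ...                     # same structure, with additional image files
```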

## Data Fields

- `id`: Question identifier
- `concept`: The concept category the question belongs to
- `question`: The question text from `question.txt`
- `answer`: The answer text from `answer.txt` (when available)
- `image`: Filename of the primary image
- `additional_images`: Additional images, if present
- `additional_text`: Additional text files with their content
- `split`: Train or test split
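
As a sketch, a single loaded record might look like the following. The field names match the list above, but every value shown is made up for illustration:

```python
# A hypothetical record; all values are illustrative, not taken from the data.
example = {
    "id": "q_001",
    "concept": "concept_10_image",
    "question": "From the person's viewpoint, which object is on the left?",
    "answer": "The red cube.",
    "image": "image.png",
    "additional_images": [],   # extra image filenames, if any
    "additional_text": {},     # extra text files and their content
    "split": "train",
}
```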

## Usage

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("path/to/dataset")

# Access examples
sample = dataset["train"][0]
print(f"Question: {sample['question']}")
print(f"Answer: {sample.get('answer', 'No answer available')}")
print(f"Image path: {sample['image']}")

# If the sample has additional images
if 'additional_images' in sample and sample['additional_images']:
    print(f"Additional images: {sample['additional_images']}")
```

## Citation

```bibtex
@article{gao2024vision,
  title={Vision Language Models See What You Want but not What You See},
  author={Gao, Qingying and Li, Yijiang and Lyu, Haiyun and Sun, Haoran and Luo, Dezhi and Deng, Hokin},
  journal={arXiv preprint arXiv:2410.00324},
  year={2024}
}
```

arXiv link: https://arxiv.org/abs/2410.00324