---
license: mit
task_categories:
- text2text-generation
language:
- en
pretty_name: Perspective-Taking
size_categories:
- n<1K
---
# Perspective-Taking Dataset

## Dataset Description
This dataset contains image-question pairs for perspective-taking tasks.
## Dataset Statistics
- Training samples: 218
- Testing samples: 25
- Total samples: 243
- Concepts: `concept_10_image`, `concept_10_multiimage`
## Dataset Structure
The dataset is organized into train and test splits for each concept. Each sample consists of:
- An image file (or multiple images)
- A question about the image
- An answer to the question (when available)
### Data Fields

- `id`: Question identifier
- `concept`: The concept category the question belongs to
- `question`: The question text from `question.txt`
- `answer`: The answer text from `answer.txt` (when available)
- `image`: Filename of the primary image
- `additional_images`: Additional images, if present
- `additional_text`: Additional text files with their contents
- `split`: Train or test split
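For orientation, a single record might look like the sketch below. The field names follow the list above and the concept value comes from the statistics section, but every other value (identifier, filenames, question and answer text) is purely hypothetical:

```python
# Hypothetical record illustrating the schema; all values are made up.
sample = {
    "id": "q_001",                  # question identifier (hypothetical)
    "concept": "concept_10_image",  # one of the concepts listed above
    "question": "From the seated person's viewpoint, is the cup on their left?",
    "answer": "yes",                # present only when an answer exists
    "image": "q_001.png",           # primary image filename (hypothetical)
    "additional_images": [],        # extra image filenames, if any
    "additional_text": "",          # contents of extra text files, if any
    "split": "train",               # "train" or "test"
}
```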
## Usage

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("path/to/dataset")

# Access examples
sample = dataset["train"][0]
print(f"Question: {sample['question']}")
print(f"Answer: {sample.get('answer', 'No answer available')}")
print(f"Image path: {sample['image']}")

# If the sample has additional images
if 'additional_images' in sample and sample['additional_images']:
    print(f"Additional images: {sample['additional_images']}")
```
## Citation

```bibtex
@article{gao2024vision,
  title={Vision Language Models See What You Want but not What You See},
  author={Gao, Qingying and Li, Yijiang and Lyu, Haiyun and Sun, Haoran and Luo, Dezhi and Deng, Hokin},
  journal={arXiv preprint arXiv:2410.00324},
  year={2024}
}
```

arXiv link: https://arxiv.org/abs/2410.00324