Perspective-Taking Dataset
Dataset Description
This dataset contains image-question pairs for perspective-taking tasks.
Dataset Statistics
- Training samples: 218
- Testing samples: 25
- Total samples: 243
- Concepts: concept_10_image, concept_10_multiimage
Dataset Structure
The dataset is organized into train and test splits for each concept. Each sample consists of:
- An image file (or multiple images)
- A question about the image
- An answer to the question (when available)
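For illustration only, a single record might look like the sketch below; the field values are hypothetical, and the full schema is described under Data Fields.

# Hypothetical sample record (illustrative values, not taken from the dataset)
sample = {
    "id": "q_0001",
    "concept": "concept_10_image",
    "question": "What can the person in the image see from their position?",
    "answer": "the red block",
    "image": "scene_0001.png",
    "additional_images": [],
    "additional_text": "",
    "split": "train",
}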
Data Fields
- id: Question identifier
- concept: The concept category the question belongs to
- question: The question text from question.txt
- answer: The answer text from answer.txt (when available)
- image: Filename of the primary image
- additional_images: Additional images, if present
- additional_text: Additional text files with their contents
- split: Train or test split
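If you assemble the dataset from the raw files yourself, a minimal datasets.Features schema matching the fields above could look like the following sketch; the concrete types (plain string filenames rather than Image features) are assumptions, not part of the published configuration.

from datasets import Features, Sequence, Value

# Assumed schema for the fields listed above; adjust if images are stored
# as datasets.Image features instead of plain filenames.
features = Features({
    "id": Value("string"),
    "concept": Value("string"),
    "question": Value("string"),
    "answer": Value("string"),
    "image": Value("string"),                        # primary image filename
    "additional_images": Sequence(Value("string")),  # extra image filenames, may be empty
    "additional_text": Value("string"),
    "split": Value("string"),
})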
Usage
from datasets import load_dataset
# Load the dataset
dataset = load_dataset("path/to/dataset")
# Access examples
sample = dataset["train"][0]
print(f"Question: {sample['question']}")
print(f"Answer: {sample.get('answer', 'No answer available')}")
print(f"Image path: {sample['image']}")
# If the sample has additional images
if 'additional_images' in sample and sample['additional_images']:
    print(f"Additional images: {sample['additional_images']}")
Citation
@article{gao2024vision,
title={Vision Language Models See What You Want but not What You See},
author={Gao, Qingying and Li, Yijiang and Lyu, Haiyun and Sun, Haoran and Luo, Dezhi and Deng, Hokin},
journal={arXiv preprint arXiv:2410.00324},
year={2024}
}
arXiv link: https://arxiv.org/abs/2410.00324