---
task_categories:
- visual-question-answering
language:
- en
tags:
- image
- text
- vlm
- spatial-perception
- spatial-reasoning
annotations_creators:
- expert-generated
pretty_name: SPHERE
size_categories:
- 1K<n<10K
source_datasets:
- "MS COCO-2017"
configs:
- config_name: distance_and_counting
data_files: "combine_2_skill/distance_and_counting.parquet"
- config_name: distance_and_size
data_files: "combine_2_skill/distance_and_size.parquet"
- config_name: position_and_counting
data_files: "combine_2_skill/position_and_counting.parquet"
- config_name: object_manipulation
data_files: "reasoning/object_manipulation.parquet"
- config_name: object_manipulation_w_intermediate
data_files: "reasoning/object_manipulation_w_intermediate.parquet"
- config_name: object_occlusion
data_files: "reasoning/object_occlusion.parquet"
- config_name: object_occlusion_w_intermediate
data_files: "reasoning/object_occlusion_w_intermediate.parquet"
- config_name: counting_only-paired-distance_and_counting
data_files: "single_skill/counting_only-paired-distance_and_counting.parquet"
- config_name: counting_only-paired-position_and_counting
data_files: "single_skill/counting_only-paired-position_and_counting.parquet"
- config_name: distance_only
data_files: "single_skill/distance_only.parquet"
- config_name: position_only
data_files: "single_skill/position_only.parquet"
- config_name: size_only
  data_files: "single_skill/size_only.parquet"
---
[SPHERE (Spatial Perception and Hierarchical Evaluation of REasoning)](https://arxiv.org/pdf/2412.12693) is a benchmark for assessing spatial reasoning in vision-language models. It introduces a hierarchical evaluation framework with a human-annotated dataset, testing models on tasks ranging from basic spatial understanding to complex multi-skill reasoning. SPHERE poses significant challenges for both state-of-the-art open-source and proprietary models, revealing critical gaps in spatial cognition.
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/66178f9809f891c11c213a68/uavJU8X_fnd4m6wUYahLR.png" alt="SPHERE results summary" width="500"/>
</p>
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/66178f9809f891c11c213a68/g0___KuwnEJ37i6-W96Ru.png" alt="SPHERE dataset examples" width="400"/>
</p>
## Dataset Usage
This version of the dataset is prepared by combining the [JSON annotations](https://github.com/zwenyu/SPHERE-VLM/tree/main/eval_datasets/coco_test2017_annotations) with the corresponding images from [MS COCO-2017](https://cocodataset.org).
The preparation script, `prepare_parquet.py`, should be run from the root of [our GitHub repository](https://github.com/zwenyu/SPHERE-VLM).
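Each configuration listed in the header above can be loaded with the `datasets` library. A minimal sketch is shown below; the repository ID is an assumption inferred from the hosting page and may need to be adjusted.

```python
from datasets import load_dataset

# Repo ID is an assumption; replace with the actual dataset ID if it differs.
ds = load_dataset("wei2912/SPHERE-VLM", "distance_and_counting")

print(ds)  # inspect the available splits and features for this configuration
```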
Please note that the images are subject to the [Terms of Use of MS COCO-2017](https://cocodataset.org/#termsofuse):
> Images
>
> The COCO Consortium does not own the copyright of the images. Use of the images must abide by the Flickr Terms of Use. The users of the images accept full responsibility for the use of the dataset, including but not limited to the use of any copies of copyrighted images that they may create from the dataset.