Add task category, link to paper and project page

#3 opened by nielsr (HF Staff)

Files changed (1): README.md (+27 -25)
README.md CHANGED
@@ -1,49 +1,51 @@
  ---
- task_categories:
- - visual-question-answering
+ annotations_creators:
+ - expert-generated
  language:
  - en
+ size_categories:
+ - 1K<n<10K
+ source_datasets:
+ - MS COCO-2017
+ task_categories:
+ - image-text-to-text
+ pretty_name: SPHERE
  tags:
  - image
  - text
  - vlm
  - spatial-perception
  - spatial-reasoning
- annotations_creators:
- - expert-generated
- pretty_name: SPHERE
- size_categories:
- - 1K<n<10K
- source_datasets:
- - "MS COCO-2017"
  configs:
  - config_name: distance_and_counting
-   data_files: "combine_2_skill/distance_and_counting.parquet"
+   data_files: combine_2_skill/distance_and_counting.parquet
  - config_name: distance_and_size
-   data_files: "combine_2_skill/distance_and_size.parquet"
+   data_files: combine_2_skill/distance_and_size.parquet
  - config_name: position_and_counting
-   data_files: "combine_2_skill/position_and_counting.parquet"
+   data_files: combine_2_skill/position_and_counting.parquet
  - config_name: object_manipulation
-   data_files: "reasoning/object_manipulation.parquet"
+   data_files: reasoning/object_manipulation.parquet
  - config_name: object_manipulation_w_intermediate
-   data_files: "reasoning/object_manipulation_w_intermediate.parquet"
+   data_files: reasoning/object_manipulation_w_intermediate.parquet
  - config_name: object_occlusion
-   data_files: "reasoning/object_occlusion.parquet"
+   data_files: reasoning/object_occlusion.parquet
  - config_name: object_occlusion_w_intermediate
-   data_files: "reasoning/object_occlusion_w_intermediate.parquet"
+   data_files: reasoning/object_occlusion_w_intermediate.parquet
  - config_name: counting_only-paired-distance_and_counting
-   data_files: "single_skill/counting_only-paired-distance_and_counting.parquet"
+   data_files: single_skill/counting_only-paired-distance_and_counting.parquet
  - config_name: counting_only-paired-position_and_counting
-   data_files: "single_skill/counting_only-paired-position_and_counting.parquet"
+   data_files: single_skill/counting_only-paired-position_and_counting.parquet
  - config_name: distance_only
-   data_files: "single_skill/distance_only.parquet"
+   data_files: single_skill/distance_only.parquet
  - config_name: position_only
-   data_files: "single_skill/position_only.parquet"
+   data_files: single_skill/position_only.parquet
  - config_name: distance_only
-   data_files: "single_skill/size_only.parquet"
+   data_files: single_skill/size_only.parquet
  ---

- [SPHERE (Spatial Perception and Hierarchical Evaluation of REasoning)](https://arxiv.org/pdf/2412.12693) is a benchmark for assessing spatial reasoning in vision-language models. It introduces a hierarchical evaluation framework with a human-annotated dataset, testing models on tasks ranging from basic spatial understanding to complex multi-skill reasoning. SPHERE poses significant challenges for both state-of-the-art open-source and proprietary models, revealing critical gaps in spatial cognition.
+ [SPHERE (Spatial Perception and Hierarchical Evaluation of REasoning)](https://huggingface.co/papers/2412.12693) is a benchmark for assessing spatial reasoning in vision-language models. It introduces a hierarchical evaluation framework with a human-annotated dataset, testing models on tasks ranging from basic spatial understanding to complex multi-skill reasoning. SPHERE poses significant challenges for both state-of-the-art open-source and proprietary models, revealing critical gaps in spatial cognition.
+
+ Project page: https://sphere-vlm.github.io/

  <p align="center">
  <img src="https://cdn-uploads.huggingface.co/production/uploads/66178f9809f891c11c213a68/uavJU8X_fnd4m6wUYahLR.png" alt="SPHERE results summary" width="500"/>
@@ -59,8 +61,8 @@ configs:
  This version of the dataset is prepared by combining the [JSON annotations](https://github.com/zwenyu/SPHERE-VLM/tree/main/eval_datasets/coco_test2017_annotations) with the corresponding images from [MS COCO-2017](https://cocodataset.org).
  The script used can be found at `prepare_parquet.py`, to be executed in the root of [our GitHub repository](https://github.com/zwenyu/SPHERE-VLM).

- Please note that the images are subject to the [Terms of Use of MS COCO-2017](https://cocodataset.org/#termsofuse):
+ Please note that the images are subject to the [Terms of Use of MS COCO-2017](https://cocodataset.org/#termsofuse):\

  > Images
- >
- > The COCO Consortium does not own the copyright of the images. Use of the images must abide by the Flickr Terms of Use. The users of the images accept full responsibility for the use of the dataset, including but not limited to the use of any copies of copyrighted images that they may create from the dataset.
+ >
+ > The COCO Consortium does not own the copyright of the images. Use of the images must abide by the Flickr Terms of Use. The users of the images accept full responsibility for the use of the dataset, including but not limited to the use of any copies of copyrighted images that they may create from the dataset.
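
Each `config_name` in the YAML header above maps one dataset config to one parquet file, so any of them can be pulled directly with the `datasets` library. A minimal sketch, assuming a placeholder dataset ID (substitute this repo's actual `user/dataset` path; the column names depend on what `prepare_parquet.py` writes):

```python
from datasets import load_dataset

# Placeholder ID -- replace with this repo's actual "user/dataset" path.
repo_id = "user/SPHERE"

# "distance_only" resolves to single_skill/distance_only.parquet
# via the configs section of the YAML header.
ds = load_dataset(repo_id, "distance_only")

print(ds)            # DatasetDict; single-file configs usually expose a "train" split
print(ds["train"][0])  # one example row (image plus annotation fields)
```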
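For readers curious about the preparation step the card describes, the rough shape is a join of the per-task JSON annotations with the matching COCO images, written out as parquet. This is *not* the actual `prepare_parquet.py` (see the GitHub repo for that); it is only a sketch of the idea, and the paths and field names below are hypothetical:

```python
import json
from pathlib import Path

import pandas as pd

# Hypothetical paths and field names -- prepare_parquet.py in the
# GitHub repo is the authoritative version of this logic.
annotations = json.loads(
    Path("eval_datasets/coco_test2017_annotations/distance_only.json").read_text()
)
coco_dir = Path("coco/test2017")

rows = []
for ann in annotations:
    image_path = coco_dir / ann["file_name"]  # join annotation to its COCO image
    rows.append({
        "image": image_path.read_bytes(),     # embed the image bytes in the row
        "question": ann["question"],
        "answer": ann["answer"],
    })

pd.DataFrame(rows).to_parquet("single_skill/distance_only.parquet")
```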