sahitiy51 committed
Commit bb0efcd · 1 Parent(s): 62e97f2

update README
Files changed (3):
  1. README.md +49 -33
  2. assets/.DS_Store +0 -0
  3. assets/geochain-teaser.png +3 -0
README.md CHANGED
@@ -19,52 +19,68 @@ annotations_creators:
  - "expert-generated"
  ---
 
- 🗺️ GeoChain Dataset
-
- Welcome to the Hugging Face repository for the GeoChain Benchmark data. GeoChain is a large-scale benchmark introduced in the paper "GeoChain: Multimodal Chain-of-Thought for Geographic Reasoning" for evaluating step-by-step geographic reasoning in multimodal large language models (MLLMs).
-
- Leveraging 1.46 million Mapillary street-level images, GeoChain pairs each image with a 21-step chain-of-thought (CoT) question sequence, resulting in over 30 million Q&A pairs. These sequences are designed to guide models from coarse attributes to fine-grained localization, covering four key reasoning categories: visual, spatial, cultural, and precise geolocation, with annotations for difficulty. Images within the dataset are also enriched with semantic segmentation (150 classes) and a visual locatability score.
-
- Our benchmarking of contemporary MLLMs (including GPT-4.1 variants, Claude 3.7, and Gemini 2.5 variants) on a diverse 2,088-image subset reveals consistent challenges: models frequently exhibit weaknesses in visual grounding, display erratic reasoning, and struggle to achieve accurate localization, especially as reasoning complexity escalates. GeoChain offers a robust diagnostic methodology, critical for fostering significant advancements in complex geographic reasoning within MLLMs.
-
- This Hugging Face repository provides the GeoChain dataset files, ready to be used with the datasets library. For all associated code, including data generation scripts, evaluation utilities, and a PyTorch Dataset class, please visit our GitHub repository:
-
- ➡️ GeoChain Code on GitHub: https://github.com/sahitiy/geochain
-
- 📦 Dataset Structure & Content:
-
- This dataset provides two main splits for evaluation, processed from the test_samples.csv and main_test_samples.csv files:
-
- mini_test Split: A smaller subset for quick evaluation runs.
-
- Features:
- image: A PIL Image object representing the street-level image, cropped by removing a predefined number of rows from the bottom (e.g., 50 rows, as per the generation script logic detailed in the GitHub repository).
- locatability_score: (float) The visual locatability score of the image.
- lat: (float) Latitude of the image.
- lon: (float) Longitude of the image.
- class_mapping: (string) Associated class mapping.
- sequence_key: (string) Unique sequence identifier.
- Note: key, sub_folder, and city fields from the source CSV are used for image pathing during generation and are not present as distinct features in this processed split.
-
- test Split: The full-scale test set for comprehensive evaluation.
-
- Features:
-
- key: (string) Unique identifier for the image.
- locatability_score: (float) The visual locatability score.
- lat: (float) Latitude of the image.
- lon: (float) Longitude of the image.
- city: (string) City where the image was taken.
- sub_folder: (string) Sub-folder information related to image storage/organization.
- class_mapping: (string) Associated class mapping.
- sequence_key: (string) Unique sequence identifier.
- image: This feature will be None for the test split, as this split primarily provides metadata for evaluation against models that might generate or retrieve images.
-
- Citation:
-
- If you use GeoChain benchmark for your research, please cite us
- ```
  @misc{yerramilli2025geochainmultimodalchainofthoughtgeographic,
  title={GeoChain: Multimodal Chain-of-Thought for Geographic Reasoning},
  author={Sahiti Yerramilli and Nilay Pande and Rynaa Grover and Jayant Sravan Tamarapalli},
@@ -72,6 +88,6 @@ If you use GeoChain benchmark for your research, please cite us
  eprint={2506.00785},
  archivePrefix={arXiv},
  primaryClass={cs.AI},
- url={https://arxiv.org/abs/2506.00785},
  }
- ```
 
  - "expert-generated"
  ---
 
+ ## GeoChain: Multimodal Chain-of-Thought for Geographic Reasoning
+
+ **[Paper on arXiv](https://arxiv.org/abs/2506.00785)** | **[Code on GitHub](https://github.com/sahitiy/geochain)**
+
+ Sahiti Yerramilli\*, Nilay Pande\*, Rynaa Grover\*, and Jayant Sravan Tamarapalli\* (\*equal contribution)
+
+ ![GeoChain teaser](assets/geochain-teaser.png)
+
+ <p align="justify">
+ GeoChain is a large-scale benchmark introduced for evaluating step-by-step geographic reasoning in multimodal large language models (MLLMs). Leveraging 1.46 million Mapillary street-level images, GeoChain pairs each image with a 21-step chain-of-thought (CoT) question sequence, resulting in over 30 million Q&A pairs. These sequences are designed to guide models from coarse attributes to fine-grained localization, covering four key reasoning categories: visual, spatial, cultural, and precise geolocation, with annotations for difficulty. Images within the dataset are also enriched with semantic segmentation (150 classes) and a visual locatability score. Our benchmarking of contemporary MLLMs reveals consistent challenges: models frequently exhibit weaknesses in visual grounding, display erratic reasoning, and struggle to achieve accurate localization, especially as reasoning complexity escalates. GeoChain offers a robust diagnostic methodology, critical for fostering significant advancements in complex geographic reasoning within MLLMs.
+ </p>
+
+ ## How to Use
+
+ The dataset can be loaded using the Hugging Face `datasets` library:
+
+ ```python
+ from datasets import load_dataset
+
+ # Load the mini_test split for quick experiments
+ mini_dataset = load_dataset("sahitiy51/geochain", split="mini_test")
+
+ # Load the full test split
+ full_dataset = load_dataset("sahitiy51/geochain", split="test")
+
+ print(mini_dataset[0])
+ ```
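+
+ The `test` split can also be streamed rather than downloaded up front. This is a minimal sketch; it assumes the standard `datasets` streaming mode is supported for this repository:
+
+ ```python
+ from datasets import load_dataset
+
+ # Stream records instead of materializing the split locally
+ streamed = load_dataset("sahitiy51/geochain", split="test", streaming=True)
+
+ # Pull the first record from the stream
+ first = next(iter(streamed))
+ print(first["sequence_key"])
+ ```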
+
+ ## Dataset Structure
+
+ This dataset provides two main splits for evaluation:
+
+ ### `mini_test` Split
+ A smaller subset for quick evaluation runs; a short inspection sketch follows the feature list.
+
+ **Features:**
+ * `image`: A PIL Image object representing the street-level image.
+ * `locatability_score`: (float) The visual locatability score of the image.
+ * `lat`: (float) Latitude of the image.
+ * `lon`: (float) Longitude of the image.
+ * `class_mapping`: (string) Associated class mapping.
+ * `sequence_key`: (string) Unique sequence identifier.
+
+ ### `test` Split
+ The full-scale test set for comprehensive evaluation; a coordinate-scoring sketch follows the feature list.
+
+ **Features:**
+ * `key`: (string) Unique identifier for the image.
+ * `locatability_score`: (float) The visual locatability score.
+ * `lat`: (float) Latitude of the image.
+ * `lon`: (float) Longitude of the image.
+ * `city`: (string) City where the image was taken.
+ * `sub_folder`: (string) Sub-folder information related to image storage/organization.
+ * `class_mapping`: (string) Associated class mapping.
+ * `sequence_key`: (string) Unique sequence identifier.
+ * `image`: This feature is `None` for the test split, as this split primarily provides metadata.
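+
+ Because this split ships coordinates but no images, one natural use is scoring a model's predicted coordinates against the ground-truth `lat`/`lon`. This sketch reuses `full_dataset` from the loading snippet above and measures error with a haversine distance, an illustrative choice rather than necessarily the paper's official metric; the predicted coordinates are placeholder values:
+
+ ```python
+ import math
+
+ def haversine_km(lat1, lon1, lat2, lon2):
+     """Great-circle distance between two (lat, lon) points in kilometers."""
+     r = 6371.0  # mean Earth radius in km
+     phi1, phi2 = math.radians(lat1), math.radians(lat2)
+     dphi = math.radians(lat2 - lat1)
+     dlam = math.radians(lon2 - lon1)
+     a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
+     return 2 * r * math.asin(math.sqrt(a))
+
+ # Placeholder prediction; in practice this comes from your model
+ pred_lat, pred_lon = 48.8566, 2.3522
+
+ record = full_dataset[0]
+ error_km = haversine_km(pred_lat, pred_lon, record["lat"], record["lon"])
+ print(f"Localization error: {error_km:.1f} km")
+ ```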
+
+ ## Citation
+
+ If you find our work useful, please cite the following paper:
+
+ ```bibtex
  @misc{yerramilli2025geochainmultimodalchainofthoughtgeographic,
  title={GeoChain: Multimodal Chain-of-Thought for Geographic Reasoning},
  author={Sahiti Yerramilli and Nilay Pande and Rynaa Grover and Jayant Sravan Tamarapalli},
  eprint={2506.00785},
  archivePrefix={arXiv},
  primaryClass={cs.AI},
+ url={https://arxiv.org/abs/2506.00785},
  }
+ ```
assets/.DS_Store ADDED
Binary file (6.15 kB)
 
assets/geochain-teaser.png ADDED

Git LFS Details

  • SHA256: 79da1fa1375714e69209fbf4d61890868f9aff1e8d42d7a0a5d4b01d286072eb
  • Pointer size: 132 Bytes
  • Size of remote file: 2.53 MB