Tasks: Visual Question Answering
Formats: parquet
Languages: English
Size: 1M - 10M
ArXiv: 2506.00785
Tags: geographic-reasoning, multimodal, mllm-benchmark, street-view-images, chain-of-thought, visual-grounding
Annotations creators: expert-generated
## GeoChain: Multimodal Chain-of-Thought for Geographic Reasoning

**[Paper on arXiv](https://arxiv.org/abs/2506.00785)** | **[Code on GitHub](https://github.com/sahitiy/geochain)**

Sahiti Yerramilli*, Nilay Pande*, Rynaa Grover*, and Jayant Sravan Tamarapalli* (*equal contribution)

![GeoChain Teaser Image](assets/geochain-teaser.png)

<p align="justify">
GeoChain is a large-scale benchmark for evaluating step-by-step geographic reasoning in multimodal large language models (MLLMs). Leveraging 1.46 million Mapillary street-level images, GeoChain pairs each image with a 21-step chain-of-thought (CoT) question sequence, resulting in over 30 million Q&A pairs. These sequences are designed to guide models from coarse attributes to fine-grained localization, covering four key reasoning categories: visual, spatial, cultural, and precise geolocation, with annotations for difficulty. Images in the dataset are also enriched with semantic segmentation (150 classes) and a visual locatability score. Our benchmarking of contemporary MLLMs reveals consistent challenges: models frequently exhibit weaknesses in visual grounding, display erratic reasoning, and struggle to localize accurately, especially as reasoning complexity escalates. GeoChain offers a robust diagnostic methodology, critical for fostering significant advancements in complex geographic reasoning within MLLMs.
</p>

## How to Use

The dataset can be loaded with the Hugging Face `datasets` library:

```python
from datasets import load_dataset

# Load the mini_test split for quick experiments
mini_dataset = load_dataset("sahitiy51/geochain", split="mini_test")

# Load the full test split
full_dataset = load_dataset("sahitiy51/geochain", split="test")

print(mini_dataset[0])
```
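
The `locatability_score` field makes it easy to carve out easier or harder evaluation subsets. Here is a minimal sketch, assuming higher scores mean an image is easier to localize; the 0.7 cutoff is an illustrative choice, not an official threshold:

```python
# Keep only images scored as highly locatable.
# NOTE: the 0.7 threshold is illustrative, not an official cutoff.
easy_subset = mini_dataset.filter(lambda ex: ex["locatability_score"] > 0.7)
print(f"{len(easy_subset)} of {len(mini_dataset)} images above threshold")
```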

## Dataset Structure

This dataset provides two main splits for evaluation:

### `mini_test` Split

A smaller subset for quick evaluation runs.

**Features:**

* `image`: A PIL Image object representing the street-level image.
* `locatability_score`: (float) The visual locatability score of the image.
* `lat`: (float) Latitude of the image.
* `lon`: (float) Longitude of the image.
* `class_mapping`: (string) Associated class mapping.
* `sequence_key`: (string) Unique sequence identifier.

Note: the `key`, `sub_folder`, and `city` fields from the source CSV are used for image pathing during generation and are not present as distinct features in this split.
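
Since every example carries ground-truth `lat`/`lon`, predicted coordinates can be scored by great-circle distance. Below is a minimal sketch using the haversine formula; `pred_lat`/`pred_lon` stand in for a model's output, and nothing here is the benchmark's official metric:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometers."""
    r = 6371.0  # mean Earth radius in km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

example = mini_dataset[0]             # mini_dataset from the snippet above
pred_lat, pred_lon = 48.8566, 2.3522  # hypothetical model prediction
error_km = haversine_km(example["lat"], example["lon"], pred_lat, pred_lon)
print(f"Localization error: {error_km:.1f} km")
```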

### `test` Split

The full-scale test set for comprehensive evaluation.

**Features:**

* `key`: (string) Unique identifier for the image.
* `locatability_score`: (float) The visual locatability score of the image.
* `lat`: (float) Latitude of the image.
* `lon`: (float) Longitude of the image.
* `city`: (string) City where the image was taken.
* `sub_folder`: (string) Sub-folder information related to image storage/organization.
* `class_mapping`: (string) Associated class mapping.
* `sequence_key`: (string) Unique sequence identifier.
* `image`: This feature is `None` for the test split, as it primarily provides metadata.
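
Because `image` is `None` here, test-split images have to be resolved against a local copy of the source imagery. The sketch below assumes a purely hypothetical directory layout in which `city`, `sub_folder`, and `key` identify the file; adapt the path template to however your image download is actually organized:

```python
from pathlib import Path
from PIL import Image

# HYPOTHETICAL layout: <root>/<city>/<sub_folder>/<key>.jpg
# The real layout depends on how the source images were downloaded.
IMAGE_ROOT = Path("mapillary_images")

def load_test_image(example):
    path = IMAGE_ROOT / example["city"] / example["sub_folder"] / f"{example['key']}.jpg"
    return Image.open(path)
```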

## Citation

If you find our work useful, please cite the following paper:

```bibtex
@misc{yerramilli2025geochainmultimodalchainofthoughtgeographic,
      title={GeoChain: Multimodal Chain-of-Thought for Geographic Reasoning},
      author={Sahiti Yerramilli and Nilay Pande and Rynaa Grover and Jayant Sravan Tamarapalli},
      year={2025},
      eprint={2506.00785},
      archivePrefix={arXiv},
      primaryClass={cs.AI},
      url={https://arxiv.org/abs/2506.00785},
}
```
|
assets/.DS_Store
ADDED
Binary file (6.15 kB). View file
|
|
assets/geochain-teaser.png
ADDED
![]() |
Git LFS Details
|