---
pretty_name: GeoChain Benchmark
language: en
license: cc-by-nc-sa-4.0
task_categories:
  - visual-question-answering
tags:
  - geographic-reasoning
  - multimodal
  - mllm-benchmark
  - street-view-images
  - chain-of-thought
  - visual-grounding
  - spatial-reasoning
  - cultural-reasoning
  - question-answering
  - computer-vision
annotations_creators:
  - expert-generated
configs:
  - config_name: default
    data_files:
      - split: mini_test
        path: data/mini_test-*
      - split: test
        path: data/test-*
dataset_info:
  features:
    - name: key
      dtype: string
    - name: locatability_score
      dtype: float64
    - name: lat
      dtype: float64
    - name: lon
      dtype: float64
    - name: city
      dtype: string
    - name: sub_folder
      dtype: string
    - name: class_mapping
      dtype: string
    - name: sequence_key
      dtype: string
    - name: image
      dtype: image
  splits:
    - name: mini_test
      num_bytes: 102403210.625
      num_examples: 2099
    - name: test
      num_bytes: 591793268
      num_examples: 1441792
  download_size: 464140800
  dataset_size: 694196478.625
---

# GeoChain: Multimodal Chain-of-Thought for Geographic Reasoning

[Paper on arXiv](https://arxiv.org/abs/2506.00785) | [Code on GitHub]

GeoChain is a large-scale benchmark for evaluating step-by-step geographic reasoning in multimodal large language models (MLLMs). Built on 1.46 million Mapillary street-level images, GeoChain pairs each image with a 21-step chain-of-thought (CoT) question sequence, yielding over 30 million Q&A pairs. The sequences guide models from coarse attributes to fine-grained localization and cover four key reasoning categories (visual, spatial, cultural, and precise geolocation), with difficulty annotations. Images are further enriched with semantic segmentation over 150 classes and a visual locatability score. Our benchmarking of contemporary MLLMs reveals consistent challenges: models frequently exhibit weak visual grounding, erratic reasoning, and inaccurate localization, especially as reasoning complexity escalates. GeoChain offers a robust diagnostic methodology, critical for fostering significant advancements in complex geographic reasoning within MLLMs.

## How to Use

The dataset can be loaded with the Hugging Face `datasets` library:

```python
from datasets import load_dataset

# Load the mini_test split for quick experiments
mini_dataset = load_dataset("sahitiy51/geochain", split="mini_test")

# Load the full test split (metadata only; see Dataset Structure below)
full_dataset = load_dataset("sahitiy51/geochain", split="test")

# Inspect the first example
print(mini_dataset[0])
```
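
Each `mini_test` example bundles the image with its metadata fields. The snippet below is a minimal sketch of pulling out the image and coordinates of a single example (field names follow the feature list in the next section; the output filename is arbitrary):

```python
# Minimal sketch: inspect one mini_test example.
sample = mini_dataset[0]

image = sample["image"]   # PIL Image object
image.save("sample.jpg")  # arbitrary filename, saved for a quick look

print(f"Location: ({sample['lat']:.4f}, {sample['lon']:.4f})")
print(f"Locatability score: {sample['locatability_score']:.3f}")
```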

## Dataset Structure

This dataset provides two main splits for evaluation:

### mini_test Split

A smaller subset for quick evaluation runs.

Features:

- `image`: A PIL Image object representing the street-level image.
- `locatability_score`: (float) The visual locatability score of the image.
- `lat`: (float) Latitude of the image.
- `lon`: (float) Longitude of the image (together with `lat`, used in the distance-error sketch below).
- `class_mapping`: (string) Associated class mapping.
- `sequence_key`: (string) Unique sequence identifier.
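
Since the benchmark's final steps ask for precise geolocation, a natural way to score a model's predicted coordinates against the `lat`/`lon` ground truth is the great-circle (haversine) distance. The helper below is a minimal sketch of that idea, not the official GeoChain evaluation protocol; the prediction values are placeholders.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two (lat, lon) points."""
    r = 6371.0  # mean Earth radius in km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Example: compare a (hypothetical) model prediction to one sample's ground truth.
sample = mini_dataset[0]
pred_lat, pred_lon = 48.8566, 2.3522  # placeholder prediction
error_km = haversine_km(sample["lat"], sample["lon"], pred_lat, pred_lon)
print(f"Localization error: {error_km:.1f} km")
```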

### test Split

The full-scale test set for comprehensive evaluation.

Features:

- `key`: (string) Unique identifier for the image.
- `locatability_score`: (float) The visual locatability score.
- `lat`: (float) Latitude of the image.
- `lon`: (float) Longitude of the image.
- `city`: (string) City where the image was taken.
- `sub_folder`: (string) Sub-folder information related to image storage/organization.
- `class_mapping`: (string) Associated class mapping.
- `sequence_key`: (string) Unique sequence identifier.
- `image`: None for this split, which primarily provides metadata (see the filtering sketch below).
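
Because the `test` split is metadata-only, it can be sliced cheaply before any image handling. The snippet below is a minimal sketch of filtering it with the `datasets` API; the city name and score threshold are arbitrary examples, not prescribed values.

```python
# Minimal sketch: metadata-only filtering of the test split.
subset = full_dataset.filter(
    lambda ex: ex["city"] == "Paris" and ex["locatability_score"] >= 0.5
)
print(f"Matching examples: {len(subset)}")
```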

## Citation

If you find our work useful, please cite the following paper:

```bibtex
@misc{yerramilli2025geochainmultimodalchainofthoughtgeographic,
      title={GeoChain: Multimodal Chain-of-Thought for Geographic Reasoning},
      author={Sahiti Yerramilli and Nilay Pande and Rynaa Grover and Jayant Sravan Tamarapalli},
      year={2025},
      eprint={2506.00785},
      archivePrefix={arXiv},
      primaryClass={cs.AI},
      url={https://arxiv.org/abs/2506.00785},
}
```