---
language:
  - en
  - hi
license: cc-by-nc-sa-4.0
size_categories:
  - 1K<n<10K
task_categories:
  - table-question-answering
  - visual-question-answering
  - image-text-to-text
tags:
  - cricket
configs:
  - config_name: default
    data_files:
      - split: test_single
        path: data/test_single-*
      - split: test_multi
        path: data/test_multi-*
dataset_info:
  features:
    - name: id
      dtype: string
    - name: images
      sequence: image
    - name: question
      dtype: string
    - name: answer
      dtype: string
    - name: category
      dtype: string
    - name: subset
      dtype: string
  splits:
    - name: test_single
      num_bytes: 976385438
      num_examples: 2000
    - name: test_multi
      num_bytes: 904538778
      num_examples: 997
  download_size: 1573738795
  dataset_size: 1880924216
---

# MMCricBench 🏏

**Multimodal Cricket Scorecard Benchmark for VQA**

This repository contains the dataset for the paper *Mind the (Language) Gap: Towards Probing Numerical and Cross-Lingual Limits of LVLMs*.

MMCricBench evaluates Large Vision-Language Models (LVLMs) on numerical reasoning, cross-lingual understanding, and multi-image reasoning over semi-structured cricket scorecard images. It includes English and Hindi scorecards; all questions/answers are in English.


## Overview

- **Images:** 1,463 synthetic scorecards (PNG)
  - 822 single-image scorecards
  - 641 multi-image scorecards
- **QA pairs:** 1,500 (English)
- **Reasoning categories:**
  - C1 – direct retrieval & simple inference
  - C2 – basic arithmetic & conditional logic
  - C3 – multi-step quantitative reasoning (often across images)

## Files / Splits

We provide two evaluation splits:

- `test_single` – single-image questions
- `test_multi` – multi-image questions

If you keep a single local JSONL (e.g., `test_all.jsonl`), use a list for `images` in every row; single-image rows should contain a one-element list. On the Hub, we expose the two test splits directly.
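A minimal sketch of loading such a combined local file with the generic `datasets` JSON loader (the file name `test_all.jsonl` and the relative image paths are assumptions; adjust them to your layout):

```python
from datasets import load_dataset
from PIL import Image

# Load a combined local JSONL; image paths remain plain strings here,
# unlike the Hub splits, which decode images automatically.
ds = load_dataset("json", data_files={"test_all": "test_all.jsonl"})["test_all"]

row = ds[0]
# Every row stores a list of paths, even for single-image questions,
# so the same loop handles both subsets.
imgs = [Image.open(p) for p in row["images"]]
print(row["question"], "->", row["answer"], f"({len(imgs)} image(s))")
```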


## Data Schema

Each row is a JSON object:

| Field | Type | Description |
|---|---|---|
| `id` | string | Unique identifier |
| `images` | list[string] | Paths to one or more scorecard images |
| `question` | string | Question text (English) |
| `answer` | string | Ground-truth answer (canonicalized) |
| `category` | string (`C1`/`C2`/`C3`) | Reasoning category |
| `subset` | string (`single`/`multi`) | Optional convenience field |

Example (single-image):

{"id":"english-single-9","images":["English-apr/single_image/1198246_2innings_with_color1.png"],"question":"Which bowler has conceded the most extras?","answer":"Wahab Riaz","category":"C2","subset":"single"}

Loading & Preview

### Load from the Hub (two-split layout)

```python
from datasets import load_dataset

# Loads: DatasetDict({'test_single': ..., 'test_multi': ...})
ds = load_dataset("DIALab/MMCricBench")
print(ds)

# Peek at a single-image example
ex = ds["test_single"][0]
print(ex["id"])
print(ex["question"], "->", ex["answer"])

# Preview images (each example stores a list of PIL images)
from IPython.display import display
for img in ex["images"]:
    display(img)
```
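The `category` field makes it easy to slice the benchmark by reasoning type. A small sketch using the standard `datasets.Dataset.filter` API, assuming the `ds` object loaded above:

```python
from collections import Counter

# Distribution of reasoning categories in the single-image split
print(Counter(ds["test_single"]["category"]))

# Keep only multi-step quantitative reasoning (C3) questions
c3 = ds["test_single"].filter(lambda ex: ex["category"] == "C3")
print(len(c3), "C3 questions")
```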

## Baseline Results (from the paper)

Accuracy (%) on MMCricBench by split and language.

| Model | #Params | Single-EN (Avg) | Single-HI (Avg) | Multi-EN (Avg) | Multi-HI (Avg) |
|---|---|---|---|---|---|
| SmolVLM | 500M | 19.2 | 19.0 | 11.8 | 11.6 |
| Qwen2.5VL | 3B | 40.2 | 33.3 | 31.2 | 22.0 |
| LLaVA-NeXT | 7B | 28.3 | 26.6 | 16.2 | 14.8 |
| mPLUG-DocOwl2 | 8B | 20.7 | 19.9 | 15.2 | 14.4 |
| Qwen2.5VL | 7B | 49.1 | 42.6 | 37.0 | 32.2 |
| InternVL-2 | 8B | 29.4 | 23.4 | 18.6 | 18.2 |
| Llama-3.2-V | 11B | 27.3 | 24.8 | 26.2 | 20.4 |
| GPT-4o | – | 57.3 | 45.1 | 50.6 | 43.6 |

Numbers are exact-match accuracy (higher is better). For C1/C2/C3 breakdowns, see Table 3 (single-image) and Table 5 (multi-image) in the paper.
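For reference, a minimal sketch of an exact-match scorer; the whitespace/case normalization below is an assumption and may differ from the paper's answer canonicalization:

```python
def exact_match(pred: str, gold: str) -> bool:
    """Exact match after naive normalization (an assumed stand-in
    for the paper's answer canonicalization)."""
    norm = lambda s: " ".join(s.strip().lower().split())
    return norm(pred) == norm(gold)

def accuracy(preds, examples):
    """Percentage of predictions exactly matching the gold answers."""
    hits = sum(exact_match(p, ex["answer"]) for p, ex in zip(preds, examples))
    return 100.0 * hits / len(examples)
```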

## Contact

For questions or issues, please open a discussion on the dataset page or email Abhirama Subramanyam at [email protected].