---
configs:
  - config_name: image2text_info
    data_files: image2text_info.csv
  - config_name: image2text_option
    data_files: image2text_option.csv
  - config_name: text2image_info
    data_files: text2image_info.csv
  - config_name: text2image_option
    data_files: text2image_option.csv
license: cc-by-nc-sa-4.0
language:
  - en
size_categories:
  - 1K<n<10K
tags:
  - benchmark
  - mllm
  - scientific
  - cover
  - live
task_categories:
  - image-text-to-text
---

# MAC: A Live Benchmark for Multimodal Large Language Models in Scientific Understanding

[arXiv](https://arxiv.org/abs/2508.15802) | [GitHub](https://github.com/mhjiang0408/MAC_Bench) | License: CC BY-NC-SA 4.0

## 📋 Dataset Description

MAC is a comprehensive live benchmark designed to evaluate multimodal large language models (MLLMs) on scientific understanding tasks. The dataset focuses on scientific journal cover understanding, providing a challenging testbed for assessing the visual-textual comprehension capabilities of MLLMs in academic domains.

## 🎯 Tasks

### 1. Image-to-Text Understanding

- **Input:** Scientific journal cover image
- **Task:** Select the most accurate textual description from 4 multiple-choice options
- **Question Format:** "Which of the following options best describe the cover image?"

### 2. Text-to-Image Understanding

- **Input:** Journal cover story text description
- **Task:** Select the corresponding image from 4 visual options
- **Question Format:** "Which of the following options best describe the cover story?"
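
To make the multiple-choice format concrete, here is a minimal sketch of how one image-to-text row could be turned into a prompt. This is not the official toolkit's prompt template; the field names (`question`, `option_A`, ..., `option_D`) follow the schema in the Data Fields section below.

```python
# Minimal sketch (not the official MAC_Bench prompt format): turn one
# image2text row into a 4-way multiple-choice question for an MLLM.

def build_image2text_prompt(row: dict) -> str:
    """Format a row's question and its four options as a single prompt."""
    options = "\n".join(
        f"{letter}. {row[f'option_{letter}']}" for letter in "ABCD"
    )
    return (
        f"{row['question']}\n\n{options}\n\n"
        "Answer with a single letter: A, B, C, or D."
    )
```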

## 📊 Dataset Statistics

| Attribute | Value |
|---|---|
| Source Journals | Nature, Science, Cell, ACS journals |
| Task Types | 2 (Image2Text, Text2Image) |
| Options per Question | 4 (A, B, C, D) |
| Languages | English |
| Image Format | High-resolution PNG journal covers |

## 🚀 Quick Start

### Loading the Dataset

```python
from datasets import load_dataset

dataset = load_dataset("mhjiang0408/MAC_Bench")
```
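
Each task variant is also exposed as its own configuration (the config names match the YAML metadata above), so a single task can be loaded on its own:

```python
from datasets import load_dataset

# Load only the image-to-text task with informative options; the other
# configs are image2text_option, text2image_info, and text2image_option.
image2text = load_dataset("mhjiang0408/MAC_Bench", "image2text_info")
print(image2text)
```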

### Data Fields

**Image-to-Text Task Fields (`image2text_info.csv`):**

```python
{
    'journal': str,              # Source journal name (e.g., "NATURE BIOTECHNOLOGY")
    'id': str,                   # Unique identifier (e.g., "42_7")
    'question': str,             # Task question
    'cover_image': str,          # Path to cover image
    'answer': str,               # Correct answer ('A', 'B', 'C', 'D')
    'option_A': str,             # Option A text
    'option_A_path': str,        # Path to option A story file
    'option_A_embedding_name': str,  # Embedding method name
    'option_A_embedding_id': str,    # Embedding identifier
    # Similar fields for options B, C, D
    'split': str                 # Dataset split ('train', 'val', 'test')
}
```
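
As a quick sanity check, the sketch below prints the key fields of one example and selects the test portion. It assumes the CSV-backed config loads as a single `train` split, with the per-row `split` column doing the actual partitioning; check `dataset.keys()` if your loaded splits differ.

```python
from datasets import load_dataset

# Assumption: the CSV config loads as one "train" split; the `split`
# column ('train'/'val'/'test') is what partitions the rows.
ds = load_dataset("mhjiang0408/MAC_Bench", "image2text_info")["train"]

row = ds[0]
print(row["journal"], row["id"])
print(row["question"])
for letter in "ABCD":
    print(f"{letter}: {row[f'option_{letter}']}")
print("answer:", row["answer"], "| split:", row["split"])

# Select the test portion via the `split` column:
test_rows = ds.filter(lambda r: r["split"] == "test")
```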

## 🔧 Evaluation Framework

Use the official MAC_Bench evaluation toolkit:

```bash
# Clone the repository and run setup
git clone https://github.com/mhjiang0408/MAC_Bench.git
cd MAC_Bench
./setup.sh
```
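
The toolkit handles prompting, image loading, and scoring end to end. If you only want a rough score outside of it, a minimal accuracy loop might look like the sketch below, which reuses `build_image2text_prompt` from the Tasks section and `test_rows` from the Quick Start snippet; `my_model_predict` is a hypothetical stand-in for your MLLM call.

```python
# Rough accuracy sketch; the official MAC_Bench toolkit is the reference
# implementation, this only illustrates the scoring logic.

def my_model_predict(image_path: str, prompt: str) -> str:
    """Placeholder: call your MLLM here and return 'A', 'B', 'C', or 'D'."""
    raise NotImplementedError

correct = 0
for row in test_rows:
    prediction = my_model_predict(row["cover_image"], build_image2text_prompt(row))
    correct += prediction == row["answer"]

print(f"image2text accuracy: {correct / len(test_rows):.3f}")
```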

## 🎓 Use Cases

- **MLLM Evaluation:** Systematic benchmarking of multimodal large language models
- **Scientific Vision-Language Research:** Cross-modal understanding in academic domains
- **Educational AI:** Development of AI systems for scientific content comprehension
- **Academic Publishing Tools:** Automated analysis of journal covers and content

## 📚 Citation

If you use the MAC dataset in your research, please cite our paper:

```bibtex
@misc{jiang2025maclivebenchmarkmultimodal,
      title={MAC: A Live Benchmark for Multimodal Large Language Models in Scientific Understanding},
      author={Mohan Jiang and Jin Gao and Jiahao Zhan and Dequan Wang},
      year={2025},
      eprint={2508.15802},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2508.15802},
}
```

## 📄 License

This dataset is released under the CC BY-NC-SA 4.0 License. See LICENSE for details.

## 🤝 Contributing

We welcome contributions to improve the dataset and benchmark:

1. Report issues via GitHub Issues
2. Submit pull requests for improvements
3. Join discussions in our GitHub Discussions