---
task_categories:
  - multiple-choice
  - question-answering
  - visual-question-answering
language:
  - en
size_categories:
  - 1K<n<10K
configs:
  - config_name: val
    data_files:
      - split: val
        path: mmstar.parquet
dataset_info:
  - config_name: val
    features:
      - name: index
        dtype: int64
      - name: question
        dtype: string
      - name: image
        dtype: image
      - name: answer
        dtype: string
      - name: category
        dtype: string
      - name: l2_category
        dtype: string
      - name: meta_info
        struct:
          - name: source
            dtype: string
          - name: split
            dtype: string
          - name: image_path
            dtype: string
          - name: passrate_for_qwen2.5_vl_7b
            dtype: float64
          - name: difficulty_level_for_qwen2.5_vl_7b
            dtype: int64
    splits:
      - name: val
        num_bytes: 44831593
        num_examples: 1500
---

# MMStar with difficulty level tags

This dataset extends the 🤗 MMStar benchmark by introducing two additional tags: `passrate_for_qwen2.5_vl_7b` and `difficulty_level_for_qwen2.5_vl_7b`. Further details are available in our paper *The Synergy Dilemma of Long-CoT SFT and RL: Investigating Post-Training Techniques for Reasoning VLMs*.

## 🚀 Data Usage

```python
from datasets import load_dataset

dataset = load_dataset("JierunChen/MMStar_with_difficulty_level")
print(dataset)
```
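The extra tags are stored in the `meta_info` struct of each example. The sketch below is illustrative only: the 0.5 pass-rate threshold is an arbitrary choice, not part of the dataset definition, and merely shows one way to select a harder subset for Qwen2.5-VL-7B.

```python
from datasets import load_dataset

# Load the single "val" split declared in the dataset card.
dataset = load_dataset("JierunChen/MMStar_with_difficulty_level", split="val")

# Each example carries the extra tags inside its meta_info struct.
sample = dataset[0]
print(sample["meta_info"]["passrate_for_qwen2.5_vl_7b"])
print(sample["meta_info"]["difficulty_level_for_qwen2.5_vl_7b"])

# Keep only questions Qwen2.5-VL-7B passes less than half the time
# (the 0.5 threshold is illustrative, not part of the dataset).
hard_subset = dataset.filter(
    lambda x: x["meta_info"]["passrate_for_qwen2.5_vl_7b"] < 0.5
)
print(f"{len(hard_subset)} / {len(dataset)} questions kept")
```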

## 📑 Citation

If you find this benchmark useful in your research, please consider citing the following BibTeX entries:

```bibtex
@article{chen2024we,
  title={Are We on the Right Way for Evaluating Large Vision-Language Models?},
  author={Chen, Lin and Li, Jinsong and Dong, Xiaoyi and Zhang, Pan and Zang, Yuhang and Chen, Zehui and Duan, Haodong and Wang, Jiaqi and Qiao, Yu and Lin, Dahua and others},
  journal={arXiv preprint arXiv:2403.20330},
  year={2024}
}

@misc{chen2025synergydilemmalongcotsft,
  title={The Synergy Dilemma of Long-CoT SFT and RL: Investigating Post-Training Techniques for Reasoning VLMs},
  author={Jierun Chen and Tiezheng Yu and Haoli Bai and Lewei Yao and Jiannan Wu and Kaican Li and Fei Mi and Chaofan Tao and Lei Zhu and Manyi Zhang and Xiaohui Li and Lu Hou and Lifeng Shang and Qun Liu},
  year={2025},
  eprint={2507.07562},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2507.07562},
}
```