---
pretty_name: MWS Vision Bench
dataset_name: mws-vision-bench
language:
  - ru
license: cc-by-4.0
tags:
  - benchmark
  - multimodal
  - ocr
  - kie
  - grounding
  - vlm
  - business
  - russian
  - document
  - visual-question-answering
  - document-question-answering
task_categories:
  - visual-question-answering
  - document-question-answering
size_categories:
  - 1K<n<10K
annotations_creators:
  - expert-generated
dataset_creators:
  - MTS AI Research
papers:
  - title: >-
      MWS Vision Bench: The First Russian Business-OCR Benchmark for Multimodal
      Models
    authors:
      - MTS AI Research Team
    year: 2025
    status: in preparation
    note: Paper coming soon
homepage: https://huggingface.co/datasets/MTSAIR/MWS-Vision-Bench
repository: https://github.com/mts-ai/MWS-Vision-Bench
organization: MTSAIR
---

# MWS-Vision-Bench

🇷🇺 Русскоязычное описание ниже / Russian summary below.

MWS Vision Bench is the first Russian-language business-OCR benchmark designed for multimodal large language models (MLLMs).
This repository contains the **validation split**, which is publicly available for open evaluation and comparison.
🧩 Paper coming soon.

- 🔗 Official repository: [github.com/mts-ai/MWS-Vision-Bench](https://github.com/mts-ai/MWS-Vision-Bench)
- 🏢 Organization: [MTSAIR](https://huggingface.co/MTSAIR) on Hugging Face
- 📰 Article on Habr (in Russian): “MWS Vision Bench — the first Russian business-OCR benchmark”


## 📊 Dataset Statistics

- **Total samples:** 1,302
- **Unique images:** 400
- **Task types:** 5
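
These numbers can be recomputed directly from `metadata.jsonl`. A minimal sketch, assuming a local checkout of the repository so that `metadata.jsonl` sits in the working directory:

```python
import json

# Read every annotation record from metadata.jsonl (one JSON object per line);
# the path assumes a local checkout of this repository
with open("metadata.jsonl", encoding="utf-8") as f:
    samples = [json.loads(line) for line in f]

print("Total samples:", len(samples))                            # 1302
print("Unique images:", len({s["file_name"] for s in samples}))  # 400
print("Task types:", len({s["type"] for s in samples}))          # 5
```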

## 🖼️ Dataset Preview

*(preview image: dataset examples)*

Examples of diverse document types in the benchmark: business documents, handwritten notes, technical drawings, receipts, and more.


## 📁 Repository Structure

```
MWS-Vision-Bench/
├── metadata.jsonl       # Dataset annotations
├── images/              # Image files organized by category
│   ├── business/
│   │   ├── scans/
│   │   ├── sheets/
│   │   ├── plans/
│   │   └── diagramms/
│   └── personal/
│       ├── hand_documents/
│       ├── hand_notebooks/
│       └── hand_misc/
└── README.md            # This file
```

## 📋 Data Format

Each line in metadata.jsonl contains one JSON object:

```jsonc
{
  "file_name": "images/image_0.jpg",   // Path to the image
  "id": "1",                           // Unique identifier
  "type": "text grounding ru",         // Task type
  "dataset_name": "business",          // Subdataset name
  "question": "...",                   // Question in Russian
  "answers": ["398", "65", ...]        // List of valid answers (as strings)
}
```
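
Since `answers` lists every accepted variant, a prediction can be scored as correct when it matches any entry. A minimal illustrative check (the normalization here is a sketch, not the benchmark's official metric):

```python
def is_correct(prediction: str, answers: list[str]) -> bool:
    # Exact match against any accepted answer after trivial normalization;
    # illustrative only, not the official scoring
    norm = prediction.strip().lower()
    return any(norm == a.strip().lower() for a in answers)

print(is_correct(" 398 ", ["398", "65"]))  # True
```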

## 🎯 Task Types

| Task | Description | Count |
|------|-------------|------:|
| document parsing ru | Parsing structured documents | 243 |
| full-page OCR ru | End-to-end OCR on full pages | 144 |
| key information extraction ru | Extracting key fields | 119 |
| reasoning VQA ru | Visual reasoning in Russian | 400 |
| text grounding ru | Text–region alignment | 396 |
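
The per-task counts can be verified the same way as the overall statistics; a sketch, again assuming a local `metadata.jsonl`:

```python
import json
from collections import Counter

# Count annotation records per task type
with open("metadata.jsonl", encoding="utf-8") as f:
    counts = Counter(json.loads(line)["type"] for line in f)

print(counts)
# Expected per the table above: reasoning VQA ru: 400, text grounding ru: 396,
# document parsing ru: 243, full-page OCR ru: 144,
# key information extraction ru: 119
```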

## 📊 Leaderboard (Validation Set)

Top models evaluated on this validation dataset:

| Model | Overall | img→text | img→markdown | Grounding | KIE (JSON) | VQA |
|-------|--------:|---------:|-------------:|----------:|-----------:|----:|
| Gemini-2.5-pro | 0.682 | 0.836 | 0.745 | 0.084 | 0.891 | 0.853 |
| Gemini-2.5-flash | 0.644 | 0.796 | 0.683 | 0.067 | 0.841 | 0.833 |
| gpt-4.1-mini | 0.643 | 0.866 | 0.724 | 0.091 | 0.750 | 0.782 |
| Claude-4.5-Sonnet | 0.639 | 0.723 | 0.676 | 0.377 | 0.728 | 0.692 |
| gpt-5-mini | 0.632 | 0.797 | 0.678 | 0.126 | 0.784 | 0.776 |
| Qwen2.5-VL-72B | 0.631 | 0.848 | 0.712 | 0.220 | 0.644 | 0.732 |
| gpt-5-mini (responses) | 0.594 | 0.743 | 0.567 | 0.118 | 0.811 | 0.731 |
| Qwen3-VL-30B-A3B | 0.589 | 0.802 | 0.688 | 0.053 | 0.661 | 0.743 |
| gpt-4.1 | 0.587 | 0.709 | 0.693 | 0.086 | 0.662 | 0.784 |
| Qwen3-VL-32B | 0.585 | 0.732 | 0.646 | 0.054 | 0.724 | 0.770 |
| Qwen3-VL-30B-A3B-FP8 | 0.583 | 0.798 | 0.683 | 0.056 | 0.638 | 0.740 |
| Qwen2.5-VL-32B | 0.577 | 0.767 | 0.649 | 0.232 | 0.493 | 0.743 |
| gpt-5 (responses) | 0.573 | 0.746 | 0.650 | 0.080 | 0.687 | 0.704 |
| Qwen2.5-VL-7B | 0.549 | 0.779 | 0.704 | 0.185 | 0.426 | 0.651 |
| gpt-4.1-nano | 0.503 | 0.676 | 0.672 | 0.028 | 0.567 | 0.573 |
| gpt-5-nano | 0.503 | 0.487 | 0.583 | 0.091 | 0.661 | 0.693 |
| Qwen3-VL-2B | 0.439 | 0.592 | 0.613 | 0.029 | 0.356 | 0.605 |
| Qwen2.5-VL-3B | 0.402 | 0.613 | 0.654 | 0.045 | 0.203 | 0.494 |

Scale: 0.0–1.0 (higher is better).
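
The Overall column appears to be the unweighted mean of the five per-task scores (it matches for the rows checked above). A quick arithmetic check:

```python
# Gemini-2.5-pro per-task scores from the table above,
# in column order: img→text, img→markdown, Grounding, KIE, VQA
scores = [0.836, 0.745, 0.084, 0.891, 0.853]
overall = sum(scores) / len(scores)
print(f"{overall:.3f}")  # 0.682, matching the Overall column
```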

**📝 Submit your model:** To evaluate on the private test set, contact [email protected].


## 💻 Usage Example

```python
from datasets import load_dataset

# Load the dataset (pass token="hf_..." if authorization is required)
dataset = load_dataset("MTSAIR/MWS-Vision-Bench")

# load_dataset returns a DatasetDict keyed by split name;
# take the first available split
split = next(iter(dataset.values()))

# Example iteration
for item in split:
    print(f"ID: {item['id']}")
    print(f"Type: {item['type']}")
    print(f"Question: {item['question']}")
    print(f"Image: {item['file_name']}")  # field name per the data format above
    print(f"Answers: {item['answers']}")
```

## 📄 License

Released under the CC BY 4.0 license (per the dataset metadata above).
© 2024 MTS AI

See LICENSE for details.


## 📚 Citation

If you use this dataset in your research, please cite:

```bibtex
@misc{mwsvisionbench2025,
  title={MWS Vision Bench: The First Russian Business-OCR Benchmark for Multimodal Models},
  author={MTS AI Research},
  organization={MTSAIR},
  year={2025},
  url={https://huggingface.co/datasets/MTSAIR/MWS-Vision-Bench},
  note={Paper coming soon}
}
```

## 🤝 Contacts

Questions and private-test evaluation requests: [email protected]

## 🇷🇺 Краткое описание

MWS Vision Bench — первый русскоязычный бенчмарк для бизнес-OCR в эпоху мультимодальных моделей.
Он включает 1302 примера и 5 типов задач, отражающих реальные сценарии обработки бизнес-документов и рукописных данных.
Датасет создан для оценки и развития мультимодальных LLM в русскоязычном контексте.
📄 Научная статья в процессе подготовки (paper coming soon).


*Made with ❤️ by MTS AI Research Team*