---
tags:
  - ocr
  - arabic
  - document-understanding
  - structure-preservation
  - computer-vision
pretty_name: Misraj-DocOCR
license: apache-2.0
---

# Misraj-DocOCR: An Arabic Document OCR Benchmark 📄

- **Dataset:** `Misraj/Misraj-DocOCR`
- **Domain:** Arabic document OCR (text + structure)
- **Size:** 400 expertly verified pages (real + synthetic)
- **Use cases:** OCR, document understanding, Markdown/HTML structure preservation
- **Status:** Public 🤝

## ✨ Overview

Misraj-DocOCR is a curated, expert-verified benchmark for Arabic document OCR with an emphasis on structure preservation (Markdown/HTML tables, lists, footnotes, math, watermarks, multi-column layouts, marginalia, etc.). Each page includes high-quality ground truth designed to evaluate both text fidelity and layout/structure fidelity.

- Diverse content: books, reports, forms, scholarly pages, and complex layouts.
- Expert-verified ground truth: human-reviewed for text and structure.
- Open & reproducible: intended for fair comparisons and reliable benchmarking.

## 📦 Data format

Each example typically includes:

- `uuid`: unique identifier for the sample
- `image`: page image (PIL-compatible)
- `markdown`: target transcription with structure preserved
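Since the target transcription is Markdown, text-only metrics are sometimes computed on a markup-stripped version of the ground truth. A minimal sketch (the regexes here are illustrative assumptions, not the benchmark's official normalization rules):

```python
# Illustrative normalization: strip lightweight Markdown markup from the
# ground-truth string before computing text-only metrics such as WER/CER.
# These regexes are assumptions for this sketch, not the benchmark's rules.
import re

def strip_markdown(md: str) -> str:
    text = re.sub(r"#+\s*", "", md)        # heading markers
    text = re.sub(r"\*\*?|__?", "", text)  # bold/italic markers
    text = text.replace("|", " ")          # table pipes
    return re.sub(r"\s+", " ", text).strip()

print(strip_markdown("## عنوان\n**نص** عادي"))  # → عنوان نص عادي
```

For structure metrics (TEDS, MARS), the `markdown` field should of course be used as-is.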

## 🔌 Loading

```python
from datasets import load_dataset

ds = load_dataset("Misraj/Misraj-DocOCR")
split = ds["train"]  # or another available split

ex = split[0]
img = ex["image"]  # PIL.Image
gt  = ex.get("markdown") or ex.get("text")
print(gt[:400])
# img.show()  # uncomment in a local environment
```

## 🧪 Metrics

We report both text and structure metrics:

- Text: WER ↓, CER ↓, BLEU ↑, ChrF ↑
- Structure: TEDS ↑, MARS ↑ (Markdown/HTML structure fidelity)
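The text metrics above can be sketched with a plain edit-distance implementation. A stdlib-only illustration (real evaluations typically use libraries such as jiwer or sacrebleu; BLEU/ChrF and the structure metrics need dedicated tooling):

```python
# Minimal WER/CER sketch via Levenshtein edit distance (stdlib only).

def edit_distance(ref, hyp):
    """Levenshtein distance between two sequences (strings or word lists)."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        curr = [i]
        for j, h in enumerate(hyp, start=1):
            cost = 0 if r == h else 1
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + cost))
        prev = curr
    return prev[-1]

def wer(ref, hyp):
    """Word error rate: word-level edit distance over reference word count."""
    ref_words = ref.split()
    return edit_distance(ref_words, hyp.split()) / max(len(ref_words), 1)

def cer(ref, hyp):
    """Character error rate: char-level edit distance over reference length."""
    return edit_distance(ref, hyp) / max(len(ref), 1)

reference = "النص المرجعي الصحيح"
print(wer(reference, reference))  # exact match → 0.0
```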

## 🏆 Leaderboard (Misraj-DocOCR)

Best values are bold, second-best are underlined.

| Model | WER ↓ | CER ↓ | BLEU ↑ | ChrF ↑ | TEDS ↑ | MARS ↑ |
|---|---|---|---|---|---|---|
| Baseer (ours) | **0.25** | 0.53 | <u>76.18</u> | <u>87.77</u> | **66** | **76.885** |
| Gemini-2.5-pro | <u>0.37</u> | <u>0.31</u> | **77.92** | **89.55** | <u>52</u> | <u>70.775</u> |
| Azure AI Document Intelligence[^azure] | 0.44 | **0.27** | 62.04 | 82.49 | 42 | 62.245 |
| Dots.ocr | 0.50 | 0.40 | 58.16 | 78.41 | 40 | 59.205 |
| Nanonets | 0.71 | 0.55 | 42.22 | 67.89 | 37 | 52.445 |
| Qari | 0.76 | 0.64 | 38.59 | 64.50 | 21 | 42.750 |
| Qwen2.5-VL-32B | 0.76 | 0.59 | 37.62 | 62.64 | 41 | 51.820 |
| GPT-5 | 0.86 | 0.62 | 40.67 | 61.6 | 48 | 54.8 |
| Qwen2.5-VL-3B-Instruct | 0.87 | 0.71 | 25.39 | 53.42 | 27 | 40.210 |
| Qwen2.5-VL-7B | 0.92 | 0.77 | 31.57 | 54.70 | 27 | 40.850 |
| Gemma3-12B | 0.96 | 0.80 | 19.75 | 44.53 | 33 | 38.765 |
| Gemma3-4B | 1.01 | 0.85 | 9.57 | 31.39 | 28 | 29.695 |
| GPT-4o-mini | 1.36 | 1.10 | 22.63 | 47.04 | 26 | 36.52 |
| AIN | 1.23 | 1.11 | 1.25 | 2.24 | 21 | 11.620 |
| Aya-vision | 1.41 | 1.07 | 2.91 | 9.81 | 26 | 17.905 |

Highlights:

- Baseer (ours) leads on WER, TEDS, and MARS → strong text and structure fidelity.
- Gemini-2.5-pro tops BLEU/ChrF; Azure AI Document Intelligence attains the lowest CER.

## 📚 How to cite

If you use Misraj-DocOCR, please cite:

```bibtex
@misc{hennara2025baseervisionlanguagemodelarabic,
      title={Baseer: A Vision-Language Model for Arabic Document-to-Markdown OCR},
      author={Khalil Hennara and Muhammad Hreden and Mohamed Motasim Hamed and Ahmad Bastati and Zeina Aldallal and Sara Chrouf and Safwan AlModhayan},
      year={2025},
      eprint={2509.18174},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2509.18174},
}
```