---
license: apache-2.0
task_categories:
- image-text-to-text
language:
- en
size_categories:
- 10K<n<100K
---
# 📊 DigitConfuse-23k: A Synthetic Dataset of Digit Confusion Patterns
DigitConfuse-23k is a synthetic dataset containing 23,000 images of digit pairs designed to capture visual anomalies and confusion cases commonly encountered in OCR, CAPTCHA recognition, optical illusions, and human digit interpretation tasks.

Each image contains two-digit numbers rendered with the Humor-Sans font (font_size=32, cell_w=60, cell_h=40). Each confusion category contains roughly 1,000 images.
## 🔒 Categories of Digit Anomalies

- 🔸 Digit shape confusion (similar glyphs) → 11 ↔ 17, 21 ↔ 27, 71 ↔ 77
- 🔄 Mirror / rotation confusion → 69 ↔ 96, 68 ↔ 86, 89 ↔ 98, 26 ↔ 62
- 🎯 One-pixel stroke differences → 33 ↔ 38, 35 ↔ 36, 53 ↔ 58, 39 ↔ 89
- 🌀 Closed vs. open loop confusion → 38 ↔ 88, 98 ↔ 99, 18 ↔ 19, 56 ↔ 58, 28 ↔ 88
- ➿ Nearly identical when repeated → 88 ↔ 89, 11 ↔ 12, 55 ↔ 56
- 👀 Human OCR-like errors (CAPTCHA/OCR cases) → 47 ↔ 17, 57 ↔ 37, 12 ↔ 72, 14 ↔ 74
🎯 Applications
πŸ§ͺ Benchmarking OCR systems
πŸ›‘ Studying digit recognition robustness
πŸ”‘ Training models for noisy / CAPTCHA-like digits
🚨 Anomaly detection in digit datasets
## ⚙️ Technical Details

- 📂 Total images: 23,000
- 📑 Categories: 23 confusion pairs
- ✍️ Font: Humor-Sans.ttf
- 🔠 Font size: 32
- 📏 Image cell size: 60 × 40 pixels (full image resolution: 2400 × 1000 pixels)

👉 This dataset provides a controlled testbed for studying digit misclassification under visually ambiguous conditions.
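For reference, a minimal rendering sketch based only on the parameters listed above (Humor-Sans.ttf, font_size=32, cell_w=60, cell_h=40). This is not the dataset's actual generation script; the centering logic and output file name are assumptions.

```python
from PIL import Image, ImageDraw, ImageFont

# Parameters taken from the dataset description above.
FONT_PATH = "Humor-Sans.ttf"  # assumed to be available locally
FONT_SIZE = 32
CELL_W, CELL_H = 60, 40

def render_cell(number: str) -> Image.Image:
    """Render a two-digit number into a single 60 x 40 cell (illustrative only)."""
    font = ImageFont.truetype(FONT_PATH, FONT_SIZE)
    cell = Image.new("RGB", (CELL_W, CELL_H), "white")
    draw = ImageDraw.Draw(cell)
    # Roughly center the text inside the cell; the real generator may differ.
    left, top, right, bottom = draw.textbbox((0, 0), number, font=font)
    x = (CELL_W - (right - left)) // 2 - left
    y = (CELL_H - (bottom - top)) // 2 - top
    draw.text((x, y), number, fill="black", font=font)
    return cell

if __name__ == "__main__":
    render_cell("69").save("cell_69.png")
```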
## 📦 How to Use

### 1️⃣ JSONL format (VQA-style for VLM testing): `merged_puzzles.jsonl`

Each entry includes the following fields (a minimal loading sketch follows the list):

- 🖼 image → file path to the digit image
- ❓ question → natural language query
- ✅ answer → ground-truth numbers
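A minimal loading sketch in plain Python, assuming the three field names above (the file name comes from this section):

```python
import json

def load_entries(path: str = "merged_puzzles.jsonl"):
    """Read the VQA-style entries: image path, question, ground-truth answer."""
    entries = []
    with open(path, "r", encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                entries.append(json.loads(line))
    return entries

entries = load_entries()
print(entries[0]["image"], entries[0]["question"], entries[0]["answer"])
```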
### 2️⃣ CSV format (digit confusion localization): `merged_puzzles.csv`

This file provides metadata about the anomaly location:

- 🖼 image → file path
- 📌 location → anomaly position (row, col)
The `merged_puzzles.zip` archive contains all the images; a short loading sketch follows.
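A sketch for unpacking the archive and reading the localization metadata. Column names are assumed to match the description above, and the extraction directory is an arbitrary choice.

```python
import csv
import zipfile

# Unpack the images once into a local directory.
with zipfile.ZipFile("merged_puzzles.zip") as zf:
    zf.extractall("images")

# Read the anomaly-location metadata.
with open("merged_puzzles.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        print(row["image"], row["location"])  # adjust keys if the actual header differs
        break
```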
## 🚀 Suggested Use Cases

- 🤖 VLM evaluation → Test Qwen-VL, InternVL, or LLaVA on fine-grained OCR tasks (a minimal scoring sketch follows this list)
- 📊 OCR benchmarking → Compare CNN-based OCR vs. multimodal LLMs
- 🔄 Data augmentation research → Train models to handle ambiguity
- 🕵️ Anomaly detection → Use confusion pairs as "hard negatives" for OCR
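As one possible evaluation harness, a minimal exact-match scorer over the JSONL entries. Here `predict` is a hypothetical placeholder for whatever VLM inference call you use (it is not part of this dataset), and stricter answer parsing may be needed depending on the model's output format.

```python
def exact_match_accuracy(entries, predict):
    """Score a model on the VQA-style entries via exact string match."""
    correct = 0
    for entry in entries:
        # `predict` is a stand-in for a VLM call, e.g. Qwen-VL or LLaVA inference.
        prediction = predict(entry["image"], entry["question"])
        if str(prediction).strip() == str(entry["answer"]).strip():
            correct += 1
    return correct / len(entries)
```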
## 🧪 Real-World Testing with Ovis 2.5-9B (Latest Release)
I evaluated a subset of images using Ovis 2.5-9B (released Aug 2025).
- 🖼 Native-resolution ViT (NaViT) → preserves fine details for loop/stroke differences
- 🔎 Reflective inference mode → improves reasoning under ambiguous digit confusions
- 🏆 Benchmark leader → achieves a 78.3 average score on OpenCompass (best among open-source models under 40B parameters)

📌 Observation: Ovis 2.5-9B performed robustly across one-pixel stroke, mirror/rotation, and loop-closure confusions, demonstrating this dataset's value for fine-grained OCR evaluation with VLMs.
This dataset is also available on other public repositories. It can be used to test VLMs' capability for fine-grained digit identification.