---
license: apache-2.0
task_categories:
- image-text-to-text
language:
- en
size_categories:
- 10K<n<100K
---
# DigitConfuse-23k: A Synthetic Dataset of Digit Confusion Patterns

DigitConfuse-23k is a synthetic dataset of 23,000 images of digit pairs designed to capture visual anomalies and confusion cases commonly encountered in OCR, CAPTCHA recognition, optical illusions, and human digit interpretation tasks.

Each image contains two-digit numbers rendered in the Humor-Sans font (`font_size=32`, `cell_w=60`, `cell_h=40`). Each confusion category contains roughly 1,000 images.
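The rendering setup above can be sketched with Pillow. This is a minimal illustration, not the dataset's actual generation script: it assumes `Humor-Sans.ttf` is available in the working directory and falls back to Pillow's built-in font otherwise.

```python
from PIL import Image, ImageDraw, ImageFont

# Values taken from the dataset description
CELL_W, CELL_H, FONT_SIZE = 60, 40, 32

def render_pair(digits: str, font_path: str = "Humor-Sans.ttf") -> Image.Image:
    """Render one two-digit number into a single 60x40 cell."""
    try:
        font = ImageFont.truetype(font_path, FONT_SIZE)
    except OSError:
        # Fallback if Humor-Sans.ttf is not installed locally
        font = ImageFont.load_default()
    img = Image.new("L", (CELL_W, CELL_H), color=255)  # white background
    draw = ImageDraw.Draw(img)
    draw.text((4, 2), digits, fill=0, font=font)       # black glyphs
    return img

cell = render_pair("69")
```

The resulting `cell` is a single 60 × 40 grayscale tile; the full dataset images tile many such cells into one canvas.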
## Categories of Digit Anomalies

- **Digit shape confusion (similar glyphs):** 11 ↔ 17, 21 ↔ 27, 71 ↔ 77
- **Mirror / rotation confusion:** 69 ↔ 96, 68 ↔ 86, 89 ↔ 98, 26 ↔ 62
- **One-pixel stroke differences:** 33 ↔ 38, 35 ↔ 36, 53 ↔ 58, 39 ↔ 89
- **Closed vs. open loop confusion:** 38 ↔ 88, 98 ↔ 99, 18 ↔ 19, 56 ↔ 58, 28 ↔ 88
- **Nearly identical when repeated:** 88 ↔ 89, 11 ↔ 12, 55 ↔ 56
- **Human OCR-like errors (CAPTCHA/OCR cases):** 47 ↔ 17, 57 ↔ 37, 12 ↔ 72, 14 ↔ 74
## Applications

- Benchmarking OCR systems
- Studying digit recognition robustness
- Training models for noisy / CAPTCHA-like digits
- Anomaly detection in digit datasets
## Technical Details

- Total images: 23,000
- Categories: 23 confusion pairs
- Font: Humor-Sans.ttf
- Font size: 32
- Image cell size: 60 × 40 pixels; full image resolution 2400 × 1000
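Given the 60 × 40 cells on a 2400 × 1000 canvas, each image is a 40-column × 25-row grid. The sketch below (assuming the `location` field is a 0-indexed `(row, col)` pair, which the card does not state explicitly) maps a grid location to its pixel bounding box:

```python
CELL_W, CELL_H = 60, 40    # per-cell size from the card
IMG_W, IMG_H = 2400, 1000  # full-image resolution from the card

# Grid dimensions implied by the cell and canvas sizes
N_COLS, N_ROWS = IMG_W // CELL_W, IMG_H // CELL_H  # 40 columns, 25 rows

def cell_bbox(row: int, col: int) -> tuple[int, int, int, int]:
    """Pixel bounding box (left, top, right, bottom) of a grid cell.
    Assumes (row, col) is 0-indexed; adjust if the CSV uses 1-indexing."""
    left, top = col * CELL_W, row * CELL_H
    return (left, top, left + CELL_W, top + CELL_H)

print(cell_bbox(2, 3))  # (180, 80, 240, 120)
```

A bounding box like this can be passed directly to `PIL.Image.crop` to extract the anomalous cell for inspection.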
This dataset provides a controlled testbed for studying digit misclassification under visually ambiguous conditions.
## How to Use

### 1. JSONL format (VQA-style for VLM testing): `merged_puzzles.jsonl`

Each entry includes:

- `image`: file path to the digit image
- `question`: natural-language query
- `answer`: ground-truth numbers

### 2. CSV format (digit confusion localization): `merged_puzzles.csv`

Provides metadata about the anomaly location:

- `image`: file path
- `location`: anomaly position (row, col)

`merged_puzzles.zip` contains all the images.
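Both files parse with the standard library. The records below are illustrative stand-ins (the paths and question text are invented for the example); the real data lives in `merged_puzzles.jsonl` and `merged_puzzles.csv`.

```python
import csv
import io
import json

# Illustrative records mirroring the documented fields
sample_jsonl = (
    '{"image": "images/pair_00001.png", '
    '"question": "What two-digit numbers are shown?", '
    '"answer": "69 96"}\n'
)
sample_csv = 'image,location\nimages/pair_00001.png,"(2, 3)"\n'

# JSONL: one VQA-style record per line
vqa_records = [json.loads(line) for line in io.StringIO(sample_jsonl)]

# CSV: anomaly localization metadata
loc_records = list(csv.DictReader(io.StringIO(sample_csv)))
```

For the real files, replace the `io.StringIO(...)` wrappers with `open("merged_puzzles.jsonl")` and `open("merged_puzzles.csv", newline="")`.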
## Suggested Use Cases

- **VLM evaluation:** test Qwen-VL, InternVL, LLaVA on fine-grained OCR tasks
- **OCR benchmarking:** compare CNN-based OCR with multimodal LLMs
- **Data augmentation research:** train models to handle ambiguity
- **Anomaly detection:** use confusion pairs as "hard negatives" for OCR
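A minimal evaluation harness for the VQA-style records might look like the sketch below. Here `predict` is a hypothetical placeholder for any VLM inference call (Qwen-VL, InternVL, LLaVA, ...), and scoring is simple exact match against the ground-truth answer string.

```python
def exact_match_accuracy(records, predict):
    """Score a model on VQA-style records.

    `predict(image_path, question) -> str` is a stand-in for your
    VLM's inference call; it is not part of this dataset.
    """
    correct = sum(
        predict(r["image"], r["question"]).strip() == r["answer"].strip()
        for r in records
    )
    return correct / len(records)

# Toy check with a stub "model" that always answers "69 96"
records = [
    {"image": "images/p1.png", "question": "Which digits?", "answer": "69 96"},
    {"image": "images/p2.png", "question": "Which digits?", "answer": "68 86"},
]
score = exact_match_accuracy(records, lambda img, q: "69 96")  # 0.5
```

Exact match is deliberately strict; for production evaluation you may want to normalize whitespace or digit ordering before comparing.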
## Real-World Testing with Ovis 2.5-9B

I evaluated a subset of the images with Ovis 2.5-9B (released August 2025):

- **Native-resolution ViT (NaViT):** preserves the fine details needed for loop/stroke differences
- **Reflective inference mode:** improves reasoning under ambiguous digit confusions
- **Benchmark leader:** 78.3 average score on OpenCompass (best among open-source models under 40B parameters)

**Observation:** Ovis 2.5-9B performed robustly across one-pixel stroke, mirror/rotation, and loop-closure confusions, demonstrating this dataset's value for fine-grained OCR evaluation with VLMs.
This dataset is also available on other public repositories and can be used to test VLMs' fine-grained digit identification capabilities.