IndicDLP: A Foundational Dataset for Multi-Lingual and Multi-Domain Document Layout Parsing
Authors:
- Oikantik Nath (IIT Madras)
- Sahithi Kukkala (IIIT Hyderabad)
- Mitesh Khapra (IIT Madras)
- Ravi Kiran Sarvadevabhatla (IIIT Hyderabad)
IndicDLP Dataset
IndicDLP is a large-scale, foundational dataset created to advance document layout parsing in multi-lingual and multi-domain settings. It comprises 119,806 document images covering 11 Indic languages and English: Assamese, Bengali, English, Gujarati, Hindi, Kannada, Malayalam, Marathi, Odia, Punjabi, Tamil, and Telugu. The dataset spans 12 diverse document categories, including Novels, Textbooks, Magazines, Acts & Rules, Research Papers, Manuals, Brochures, Syllabi, Question Papers, Notices, Forms, and Newspapers.
The dataset contains 42 physical and logical layout classes. IndicDLP includes both digitally-born and scanned documents, with annotations created using Shoonya, an open-source tool built on Label Studio. The dataset is curated to support robust layout understanding across diverse scripts, domains, and document types.
IndicDLP Model Checkpoints
We provide three model checkpoints (YOLOv10x, DocLayout-YOLO, and RoDLA) finetuned on the IndicDLP dataset. These models are optimized for robust document layout parsing across a wide range of Indic languages and document types, and are capable of detecting all 42 region labels defined in the dataset.
These checkpoints have demonstrated strong performance on both scanned and digitally-born documents. They are ready to use for inference, serve as strong baselines for benchmarking, and can be further fine-tuned for downstream tasks such as structure extraction or semantic tagging.
Available Checkpoints
| Model | mAP[50:95] | Download File | Framework |
|---|---|---|---|
| YOLOv10x | 57.7 | yolov10x.pt | Ultralytics YOLOv10 |
| DocLayout-YOLO | 54.5 | doclayout_yolo.pt | DocLayout-YOLO |
| RoDLA | 53.1 | rodla.pth | RoDLA |
Quick Start
Download the desired checkpoint(s) from this page or using the huggingface_hub CLI. See the example commands for each model below.
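A minimal download sketch using the huggingface_hub CLI; the repository ID below is a placeholder, so substitute the ID shown on this model page and pick whichever checkpoint file you need from the table above.

```bash
# Sketch: fetch one checkpoint with the huggingface_hub CLI.
# <org>/IndicDLP-checkpoints is a placeholder repository ID; use this page's actual ID.
pip install -U huggingface_hub
huggingface-cli download <org>/IndicDLP-checkpoints yolov10x.pt --local-dir checkpoints
```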
YOLOv10
Environment Setup
conda create -n indicdlp python=3.12
conda activate indicdlp
pip install -r requirements.txt
Training
yolo detect train \
data=dataset_root/data.yaml \
model=yolov10x.yaml \
device=0,1,2,3,4,5,6,7 \
epochs=100 \
imgsz=1024 \
batch=64 \
name=indicdlp_yolov10x \
patience=5
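The training and evaluation commands expect an Ultralytics-style dataset config at dataset_root/data.yaml. The sketch below assumes the usual images/train, images/val, and images/test layout; the split paths and class names are placeholders to be replaced with the actual splits and the 42 region labels from the IndicDLP release.

```bash
# Sketch: write an Ultralytics-style data.yaml (paths and class names are placeholders).
cat > dataset_root/data.yaml <<'EOF'
path: dataset_root      # dataset root directory
train: images/train     # training images, relative to path
val: images/val         # validation images
test: images/test       # test images
names:
  0: region_label_0     # placeholder; list all 42 IndicDLP region labels here
  1: region_label_1
  # ... through index 41
EOF
```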
Evaluation
yolo detect val \
model=/path/to/model_weights.pt \
data=dataset_root/data.yaml \
split=test
Inference
yolo detect predict \
model=/path/to/model_weights.pt \
source=dataset_root/images/test/ \
conf=0.2 \
save=True
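To adapt the released checkpoint to a downstream corpus, training can also start from the provided .pt weights instead of the architecture YAML. A sketch, where your_data.yaml is a hypothetical config for the target dataset and the hyperparameters are illustrative:

```bash
# Sketch: fine-tune the released YOLOv10x checkpoint on a downstream dataset.
yolo detect train \
    model=yolov10x.pt \
    data=your_data.yaml \
    epochs=50 \
    imgsz=1024 \
    batch=16 \
    name=indicdlp_yolov10x_finetune
```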
DocLayout-YOLO
For DocLayout-YOLO setup, training, and inference instructions, please see the IndicDLP GitHub repository.
RoDLA
For RoDLA setup, training, and inference instructions, please see the IndicDLP GitHub repository.
Dataset
These models are finetuned on the IndicDLP dataset.
For details, the annotation schema, and supporting scripts, visit the IndicDLP project homepage.
Citation
If you use these models or the dataset, please cite:
@inproceedings{nath2025indicdlp,
  title={IndicDLP: A Foundational Dataset for Multi-Lingual and Multi-Domain Document Layout Parsing},
  author={Oikantik Nath and Sahithi Kukkala and Mitesh Khapra and Ravi Kiran Sarvadevabhatla},
  booktitle={International Conference on Document Analysis and Recognition (ICDAR)},
  year={2025}
}
Acknowledgements
Contact
For problems running the code or broken links, please reach out to Sahithi Kukkala or Oikantik Nath, or open an issue in the Issues section.
For questions or collaborations, please reach out to Dr. Ravi Kiran Sarvadevabhatla.