PRISM: Extracting Customer Information from KYC Documents

1. Introduction
KYC (Know Your Customer) documents contain vital customer details such as name, address, pincode, city, and other personally identifiable information. Automating the extraction of this information from scanned or digital documents requires a robust, layout-aware Optical Character Recognition (OCR) system combined with intelligent entity extraction. This project aims to develop an end-to-end pipeline for layout-aware document OCR and structured entity extraction from the resulting OCR text.

2. Project Goals
2a. Goal 1: Layout-Aware Document OCR
• Extract text from KYC documents while preserving structural information (tables, sections, key-value pairs).
• Handle different document formats (Aadhaar, PAN, passport, utility bills, etc.).
• Improve OCR accuracy by leveraging layout-aware models.
2b. Goal 2: Entity Extraction from OCR Output
• Improve entity extraction by leveraging document understanding models.
• Identify and extract key customer details, such as:
o Customer Name
o Customer Address
o Pincode
o City
o Date of Birth
o Billing Date

3. Literature Survey
Extracting structured text from KYC documents requires two key components: layout-aware OCR models for text extraction, and document understanding models for extracting key entities. Below, we survey state-of-the-art (SoTA) models for both tasks.

3.1 SoTA Models for Layout-Aware OCR Extraction
a. EasyOCR
Open-source, lightweight OCR engine supporting over 80 languages. Uses deep learning-based text detection combined with CRNN-based recognition.
b. AWS Textract
Cloud-based service that extracts text and structured elements (tables, forms) from scanned documents. Provides key-value pair extraction for better document understanding.
c. Tesseract OCR
Open-source OCR engine supporting multilingual recognition. Works well on pre-processed, high-quality scans.
d. Donut (OCR-Free Transformer)
A transformer-based model that eliminates the traditional OCR step, recognizing document text directly from images without explicit character-level OCR.
e. Vision-Language Models (VLMs)
Models such as Qwen2.5-VL (Alibaba Cloud), Kosmos-2 (Microsoft Research), and PaLI (Google AI) can extract structured text from documents with spatial understanding.

3.2 SoTA Models for Document Understanding & Entity Extraction
a. LayoutLM
Transformer-based model designed for document layout understanding. Takes text, bounding boxes, and images as input to preserve document structure. The latest version, LayoutLMv3, improves multimodal fusion.
b. Table Transformer
Transformer model optimized for table and form extraction from scanned documents.
c. LiLT
A lightweight transformer model designed for multilingual document understanding.
d. LLMs (GPT-4, Llama 3, Claude 3, Mistral)
Large Language Models (LLMs) fine-tuned for entity extraction from unstructured OCR text. They can handle noisy OCR output better than rule-based NLP pipelines.

4. Methodology
Approach 1: EasyOCR + LayoutLM for Entity Extraction
Step 1: OCR Extraction using EasyOCR
• Use EasyOCR to extract text from scanned documents.
• Pre-process the text for noise reduction (e.g., spelling correction, formatting).
Step 2: Entity Extraction using LayoutLM
• Feed the OCR-extracted text, together with its bounding-box coordinates, to LayoutLM.
• Fine-tune LayoutLM to extract structured entities (name, address, pincode, etc.).
Minimal code sketches of both steps are shown below.
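For Step 1, a rough sketch of EasyOCR's Python API follows; the image path and the 0.4 confidence threshold are illustrative choices, not part of the proposal:

```python
import easyocr

# Build a reader once; EasyOCR downloads its detection and
# recognition weights on first use.
reader = easyocr.Reader(["en"])

# readtext returns (bounding_box, text, confidence) triples, where
# bounding_box is a list of four [x, y] corner points.
results = reader.readtext("kyc_document.jpg")

# Simple noise reduction: drop low-confidence detections before
# passing anything downstream.
clean = [(bbox, text) for bbox, text, conf in results if conf >= 0.4]

for bbox, text in clean:
    print(f"{text!r} at {bbox}")
```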
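For Step 2, a minimal sketch of wiring that OCR output into LayoutLMv3 token classification via Hugging Face Transformers follows. The label set, words, and pixel boxes are hypothetical stand-ins for real EasyOCR output, and the classification head is freshly initialised here, so this only illustrates the plumbing; fine-tuning on labeled KYC data is required before predictions are meaningful:

```python
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForTokenClassification

# Hypothetical BIO label set for the target KYC entities.
LABELS = ["O", "B-NAME", "I-NAME", "B-ADDRESS", "I-ADDRESS",
          "B-PINCODE", "B-CITY", "B-DOB"]

# apply_ocr=False because words and boxes come from EasyOCR rather
# than the processor's built-in OCR.
processor = AutoProcessor.from_pretrained(
    "microsoft/layoutlmv3-base", apply_ocr=False)
model = AutoModelForTokenClassification.from_pretrained(
    "microsoft/layoutlmv3-base", num_labels=len(LABELS))

def normalize_box(box, width, height):
    # LayoutLMv3 expects boxes scaled to a 0-1000 coordinate space.
    x0, y0, x1, y1 = box
    return [int(1000 * x0 / width), int(1000 * y0 / height),
            int(1000 * x1 / width), int(1000 * y1 / height)]

image = Image.open("kyc_document.jpg").convert("RGB")
width, height = image.size

# Hypothetical stand-ins for the EasyOCR output of Step 1.
words = ["RAHUL", "SHARMA", "MUMBAI", "400001"]
pixel_boxes = [(60, 40, 160, 70), (170, 40, 280, 70),
               (60, 120, 170, 150), (180, 120, 260, 150)]
boxes = [normalize_box(b, width, height) for b in pixel_boxes]

encoding = processor(image, words, boxes=boxes, return_tensors="pt")
with torch.no_grad():
    logits = model(**encoding).logits
predictions = logits.argmax(-1).squeeze().tolist()

# Subword tokens map back to their source words via word_ids().
for token_idx, word_id in enumerate(encoding.word_ids()):
    if word_id is not None:
        print(words[word_id], "->", LABELS[predictions[token_idx]])
```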
Advantages:
• EasyOCR is lightweight and easy to implement.
• LayoutLM is pre-trained on document layouts, improving entity extraction accuracy.
Challenges:
• EasyOCR might not capture complex layouts perfectly.
• LayoutLM requires labeled training data for fine-tuning.

Approach 2: Fine-tuning a VLM + LLM for Entity Extraction
Step 1: OCR Extraction using AWS Textract
• Use AWS Textract for OCR and key-value pair extraction.
• Textract provides structured text with bounding boxes, improving accuracy.
Step 2: Fine-tuning a Vision-Language Model (VLM) such as Qwen2.5-VL
• Fine-tune Qwen2.5-VL on Textract output to improve layout-aware OCR.
• Train on domain-specific documents to adapt to KYC layouts.
Step 3: Entity Extraction using a Fine-tuned LLM
• Use an LLM (GPT-4, Llama 3, or Mistral) on the extracted OCR text.
• Fine-tune the LLM on labeled KYC data for accurate entity recognition.
Illustrative code sketches for each of these three steps appear in the appendix (Section 8).
Advantages:
• VLMs can understand document structure better than traditional OCR.
• Fine-tuning an LLM provides flexibility for new entity types.
Challenges:
• Training a VLM requires significant computational resources.
• LLMs might need extensive labeled datasets for accurate entity extraction.

5. Outcome Comparison
We aim to develop a highly accurate OCR pipeline that preserves document structure, together with an automated entity extraction system for structured KYC data retrieval. Given the two approaches above, we intend to compare the performance of traditional OCR (EasyOCR + LayoutLM) against VLM-based OCR (Qwen2.5-VL + LLM) for document understanding; a toy entity-level scoring sketch is also included in the appendix.

6. Conclusion
This project will enhance OCR accuracy and entity extraction capabilities for KYC documents. By experimenting with both traditional and VLM-driven approaches, we aim to build a scalable, high-precision system for real-world KYC automation.

7. Future Work
• Explore self-supervised learning for low-resource KYC datasets.
• Develop multi-lingual OCR support for regional document formats.
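8. Appendix: Code Sketches for Approach 2 and Evaluation

Sketch for Approach 2, Step 1 (AWS Textract). A minimal synchronous boto3 call, assuming AWS credentials are already configured; the file name and region are illustrative, and resolving KEY blocks to their linked VALUE blocks via Relationships is omitted for brevity:

```python
import boto3

textract = boto3.client("textract", region_name="us-east-1")

with open("kyc_document.jpg", "rb") as f:
    document_bytes = f.read()

# FeatureTypes=["FORMS"] requests key-value pair detection in
# addition to plain text lines.
response = textract.analyze_document(
    Document={"Bytes": document_bytes},
    FeatureTypes=["FORMS"],
)

for block in response["Blocks"]:
    if block["BlockType"] == "LINE":
        # Geometry carries a normalised bounding box that downstream
        # layout-aware models can consume.
        box = block["Geometry"]["BoundingBox"]
        print(f"{block['Text']!r} (left={box['Left']:.2f}, top={box['Top']:.2f})")
    elif block["BlockType"] == "KEY_VALUE_SET" and "KEY" in block.get("EntityTypes", []):
        # KEY blocks link to VALUE blocks through Relationships;
        # full key-value resolution is omitted in this sketch.
        print("Form key detected, id:", block["Id"])
```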
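Sketch for Approach 2, Step 2 (Qwen2.5-VL). A zero-shot prompting sketch; the proposed fine-tuning would wrap this same model in a standard supervised training loop, which is out of scope here. Assumes a transformers release recent enough to ship Qwen2_5_VLForConditionalGeneration and a GPU with room for the 7B checkpoint; the prompt wording is illustrative:

```python
import torch
from PIL import Image
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

model_id = "Qwen/Qwen2.5-VL-7B-Instruct"
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto")
processor = AutoProcessor.from_pretrained(model_id)

image = Image.open("kyc_document.jpg").convert("RGB")
messages = [{
    "role": "user",
    "content": [
        {"type": "image"},
        {"type": "text", "text": (
            "Read this KYC document and return JSON with keys: "
            "customer_name, customer_address, city, pincode, date_of_birth."
        )},
    ],
}]

# Render the chat template, then bind the image via the processor.
prompt = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True)
inputs = processor(text=[prompt], images=[image],
                   return_tensors="pt").to(model.device)

with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=256)

# Decode only the newly generated tokens, skipping the prompt.
answer = processor.batch_decode(
    output_ids[:, inputs["input_ids"].shape[1]:],
    skip_special_tokens=True)[0]
print(answer)
```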
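Sketch for Approach 2, Step 3 (LLM entity extraction). A prompt-based sketch using the OpenAI chat completions API as a stand-in for whichever LLM is chosen; the model name, OCR snippet (with typical O/0 confusions), and field schema are all hypothetical:

```python
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical noisy OCR output from the previous steps.
ocr_text = (
    "RAHUL SHARMA\n"
    "Flat 12B, Sea View Aprtments\n"
    "Mumbai 4000O1\n"
    "DOB: 12/O4/1990"
)

system_prompt = (
    "You extract KYC entities from noisy OCR text. Return JSON with "
    "keys customer_name, customer_address, city, pincode and "
    "date_of_birth; use null for any field that is absent."
)

response = client.chat.completions.create(
    model="gpt-4o",  # stand-in; Llama 3 or Mistral would work similarly
    response_format={"type": "json_object"},
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": ocr_text},
    ],
)

entities = json.loads(response.choices[0].message.content)
print(entities)
```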
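Sketch for the outcome comparison (Section 5). One possible scoring scheme is exact-match precision/recall/F1 over the extracted fields; the gold record and the two pipelines' predictions below are fabricated purely to show the computation:

```python
def entity_scores(predicted: dict, gold: dict):
    """Exact-match precision/recall/F1 over a fixed KYC field set."""
    correct = sum(1 for f in gold if predicted.get(f) == gold[f])
    attempted = sum(1 for f in gold if predicted.get(f) is not None)
    precision = correct / attempted if attempted else 0.0
    recall = correct / len(gold)
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical predictions from the two pipelines on one document.
gold = {"customer_name": "RAHUL SHARMA", "city": "Mumbai", "pincode": "400001"}
approach_1 = {"customer_name": "RAHUL SHARMA", "city": "Mumbai", "pincode": "40001"}
approach_2 = {"customer_name": "RAHUL SHARMA", "city": "Mumbai", "pincode": "400001"}

for name, pred in [("EasyOCR + LayoutLM", approach_1),
                   ("Qwen2.5-VL + LLM", approach_2)]:
    p, r, f1 = entity_scores(pred, gold)
    print(f"{name}: precision={p:.2f} recall={r:.2f} F1={f1:.2f}")
```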