ScreenCoder: Advancing Visual-to-Code Generation for Front-End Automation via Modular Multimodal Agents
Abstract
A modular multi-agent framework improves UI-to-code generation by integrating vision-language models, hierarchical layout planning, and adaptive prompt-based synthesis, achieving state-of-the-art performance.
Automating the transformation of user interface (UI) designs into front-end code holds significant promise for accelerating software development and democratizing design workflows. While recent large language models (LLMs) have demonstrated progress in text-to-code generation, many existing approaches rely solely on natural language prompts, limiting their effectiveness in capturing spatial layout and visual design intent. In contrast, UI development in practice is inherently multimodal, often starting from visual sketches or mockups. To address this gap, we introduce a modular multi-agent framework that performs UI-to-code generation in three interpretable stages: grounding, planning, and generation. The grounding agent uses a vision-language model to detect and label UI components, the planning agent constructs a hierarchical layout using front-end engineering priors, and the generation agent produces HTML/CSS code via adaptive prompt-based synthesis. This design improves robustness, interpretability, and fidelity over end-to-end black-box methods. Furthermore, we extend the framework into a scalable data engine that automatically produces large-scale image-code pairs. Using these synthetic examples, we fine-tune and reinforce an open-source VLM, yielding notable gains in UI understanding and code quality. Extensive experiments demonstrate that our approach achieves state-of-the-art performance in layout accuracy, structural coherence, and code correctness. Our code is made publicly available at https://github.com/leigest519/ScreenCoder.
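The abstract describes a grounding → planning → generation pipeline; the sketch below shows one way such a modular flow could be wired together. It is a minimal illustration assuming a generic vision-language model wrapped as a callable; the class names (`GroundingAgent`, `PlanningAgent`, `GenerationAgent`), the prompt strings, and the containment-based nesting heuristic are hypothetical stand-ins, not the ScreenCoder implementation.

```python
# Illustrative three-stage UI-to-code pipeline (grounding -> planning -> generation).
# All names and prompts are hypothetical; this is not the ScreenCoder API.
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

# A vision-language model is modeled as a callable: (image_path, prompt) -> text.
VLM = Callable[[str, str], str]

@dataclass
class Component:
    label: str                      # e.g. "navbar", "sidebar", "card"
    bbox: Tuple[int, int, int, int] # (x, y, w, h) in screenshot pixels

@dataclass
class LayoutNode:
    component: Component
    children: List["LayoutNode"] = field(default_factory=list)

class GroundingAgent:
    """Detect and label the major UI regions in a screenshot."""
    def __init__(self, vlm: VLM):
        self.vlm = vlm
    def run(self, screenshot: str) -> List[Component]:
        raw = self.vlm(screenshot, "List each UI region as `label x y w h`, one per line.")
        components = []
        for line in raw.strip().splitlines():
            label, *coords = line.split()
            components.append(Component(label, tuple(map(int, coords))))
        return components

class PlanningAgent:
    """Organize detected components into a layout tree using a simple
    front-end prior: spatial containment implies DOM nesting."""
    def run(self, components: List[Component]) -> LayoutNode:
        root = LayoutNode(Component("page", (0, 0, 10**9, 10**9)))
        # Insert larger regions first so smaller ones can nest inside them.
        for c in sorted(components, key=lambda c: c.bbox[2] * c.bbox[3], reverse=True):
            self._insert(root, LayoutNode(c))
        return root
    def _insert(self, node: LayoutNode, new: LayoutNode) -> None:
        for child in node.children:
            if self._contains(child.component.bbox, new.component.bbox):
                self._insert(child, new)
                return
        node.children.append(new)
    @staticmethod
    def _contains(outer, inner) -> bool:
        ox, oy, ow, oh = outer
        ix, iy, iw, ih = inner
        return ox <= ix and oy <= iy and ix + iw <= ox + ow and iy + ih <= oy + oh

class GenerationAgent:
    """Prompt the model region by region and assemble the HTML/CSS bottom-up."""
    def __init__(self, vlm: VLM):
        self.vlm = vlm
    def run(self, screenshot: str, layout: LayoutNode) -> str:
        return self._render(screenshot, layout)
    def _render(self, screenshot: str, node: LayoutNode) -> str:
        inner = "\n".join(self._render(screenshot, c) for c in node.children)
        prompt = (f"Write HTML/CSS for the `{node.component.label}` region at "
                  f"{node.component.bbox}; place this markup inside it:\n{inner}")
        return self.vlm(screenshot, prompt)

def ui_to_code(screenshot: str, vlm: VLM) -> str:
    components = GroundingAgent(vlm).run(screenshot)
    layout = PlanningAgent().run(components)
    return GenerationAgent(vlm).run(screenshot, layout)
```

In this sketch the planner needs no model calls at all, which mirrors the paper's point that interpretable intermediate structure (the layout tree) can be built from engineering priors rather than left implicit inside an end-to-end model.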
Community
ScreenCoder is a modular multi-agent framework that advances UI-to-code generation by integrating visual grounding, hierarchical planning, and adaptive code synthesis.
Try it at: https://huggingface.co/spaces/Jimmyzheng-10/ScreenCoder
This is an automated message from Librarian Bot. The following similar papers were recommended via the Semantic Scholar API:
- Improved Iterative Refinement for Chart-to-Code Generation via Structured Instruction (2025)
- PresentAgent: Multimodal Agent for Presentation Video Generation (2025)
- MLLM-Based UI2Code Automation Guided by UI Layout Information (2025)
- SmartAvatar: Text- and Image-Guided Human Avatar Generation with VLM AI Agents (2025)
- GUI-Actor: Coordinate-Free Visual Grounding for GUI Agents (2025)
- LOCOFY Large Design Models -- Design to code conversion solution (2025)
- DesignBench: A Comprehensive Benchmark for MLLM-based Front-end Code Generation (2025)