Drawing2CAD: Sequence-to-Sequence Learning for CAD Generation from Vector Drawings
Abstract
Drawing2CAD is a framework that converts 2D vector drawings into parametric CAD models using a sequence-to-sequence learning approach with a dual-decoder transformer architecture and a soft target distribution loss function.
Computer-Aided Design (CAD) generative modeling is driving significant innovations across industrial applications. Recent works have shown remarkable progress in creating solid models from various inputs such as point clouds, meshes, and text descriptions. However, these methods fundamentally diverge from traditional industrial workflows that begin with 2D engineering drawings. The automatic generation of parametric CAD models from these 2D vector drawings remains underexplored despite being a critical step in engineering design. To address this gap, our key insight is to reframe CAD generation as a sequence-to-sequence learning problem where vector drawing primitives directly inform the generation of parametric CAD operations, preserving geometric precision and design intent throughout the transformation process. We propose Drawing2CAD, a framework with three key technical components: a network-friendly vector primitive representation that preserves precise geometric information, a dual-decoder transformer architecture that decouples command type and parameter generation while maintaining precise correspondence, and a soft target distribution loss function accommodating inherent flexibility in CAD parameters. To train and evaluate Drawing2CAD, we create CAD-VGDrawing, a dataset of paired engineering drawings and parametric CAD models, and conduct thorough experiments to demonstrate the effectiveness of our method. Code and dataset are available at https://github.com/lllssc/Drawing2CAD.
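To make the soft target distribution loss mentioned in the abstract concrete, the idea can be pictured as cross-entropy against a smoothed distribution over quantized parameter bins rather than a one-hot target, so that near-miss predictions of a CAD parameter are penalized less than distant ones. The following is a minimal PyTorch-style sketch of that idea under stated assumptions; the function name `soft_target_loss`, the bin count, and the Gaussian smoothing width `sigma` are illustrative choices, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def soft_target_loss(logits, target_bins, num_bins=256, sigma=1.0):
    """Cross-entropy against a Gaussian-smoothed target distribution centred
    on the ground-truth parameter bin (hypothetical sketch, not the paper's code).

    logits:      (batch, seq, num_bins) raw scores over quantised parameter values
    target_bins: (batch, seq) integer ground-truth bin indices
    """
    # Bin index grid and ground-truth bin centres, broadcast to (batch, seq, num_bins).
    bins = torch.arange(num_bins, device=logits.device, dtype=torch.float32)
    centres = target_bins.unsqueeze(-1).to(torch.float32)

    # Unnormalised Gaussian weights over neighbouring bins, then normalised
    # into a proper soft target distribution.
    weights = torch.exp(-0.5 * ((bins - centres) / sigma) ** 2)
    soft_targets = weights / weights.sum(dim=-1, keepdim=True)

    # Cross-entropy between the soft targets and the predicted distribution.
    log_probs = F.log_softmax(logits, dim=-1)
    return -(soft_targets * log_probs).sum(dim=-1).mean()
```

In this formulation, predicting a bin adjacent to the true value incurs only a small loss, which reflects the tolerance inherent in CAD parameters that the abstract refers to; with `sigma → 0` the loss reduces to standard one-hot cross-entropy.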
Community
This is an automated message from Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- SVGen: Interpretable Vector Graphics Generation with Large Language Models (2025)
- BANG: Dividing 3D Assets via Generative Exploded Dynamics (2025)
- Reg3D: Reconstructive Geometry Instruction Tuning for 3D Scene Understanding (2025)
- CADmium: Fine-Tuning Code Language Models for Text-Driven Sequential CAD Design (2025)
- StrandDesigner: Towards Practical Strand Generation with Sketch Guidance (2025)
- CAD-Judge: Toward Efficient Morphological Grading and Verification for Text-to-CAD Generation (2025)
- Generating Human-AI Collaborative Design Sequence for 3D Assets via Differentiable Operation Graph (2025)