Efficient Part-level 3D Object Generation via Dual Volume Packing
Abstract
A new end-to-end framework generates high-quality 3D objects with part-level detail from a single image using a dual volume packing strategy.
Recent progress in 3D object generation has greatly improved both quality and efficiency. However, most existing methods generate a single mesh with all parts fused together, which limits the ability to edit or manipulate individual parts. A key challenge is that different objects may have a varying number of parts. To address this, we propose a new end-to-end framework for part-level 3D object generation. Given a single input image, our method generates high-quality 3D objects with an arbitrary number of complete and semantically meaningful parts. We introduce a dual volume packing strategy that organizes all parts into two complementary volumes, allowing for the creation of complete and interleaved parts that assemble into the final object. Experiments show that our model achieves better quality, diversity, and generalization than previous image-based part-level generation methods.
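The core of dual volume packing is assigning parts to two complementary volumes so that parts in contact end up in different volumes, letting each part be generated as a complete, watertight shape. A minimal sketch of this idea, assuming it reduces to 2-coloring a part-adjacency graph (the function name, graph representation, and greedy tie-breaking below are illustrative assumptions, not the paper's implementation):

```python
from collections import deque

def pack_parts_into_two_volumes(num_parts, adjacency):
    """Assign each part index to volume 0 or 1 so that touching parts
    land in different volumes whenever the adjacency graph allows it.

    Hypothetical illustration of dual volume packing: a BFS 2-coloring
    of the part-adjacency graph. If the graph is not bipartite (odd
    cycle of contacts), conflicting parts simply keep their first
    greedy assignment.
    """
    volume = [-1] * num_parts  # -1 means not yet assigned
    for start in range(num_parts):
        if volume[start] != -1:
            continue
        volume[start] = 0  # seed each connected component in volume 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adjacency.get(u, []):
                if volume[v] == -1:
                    volume[v] = 1 - volume[u]  # opposite volume
                    queue.append(v)
    return volume
```

For example, a chair whose seat (part 0) touches both legs (1) and back (2) would place the seat in one volume and the legs and back in the other, so no two touching parts are fused inside the same volume.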
Community
The following related papers were recommended by the Semantic Scholar API:
- PartCrafter: Structured 3D Mesh Generation via Compositional Latent Diffusion Transformers (2025)
- Direct Numerical Layout Generation for 3D Indoor Scene Synthesis via Spatial Reasoning (2025)
- Constructing a 3D Town from a Single Image (2025)
- HiScene: Creating Hierarchical 3D Scenes with Isometric View Generation (2025)
- MVPainter: Accurate and Detailed 3D Texture Generation via Multi-View Diffusion with Geometric Control (2025)
- Art3D: Training-Free 3D Generation from Flat-Colored Illustration (2025)
- DSG-World: Learning a 3D Gaussian World Model from Dual State Videos (2025)