Dreamland: Controllable World Creation with Simulator and Generative Models
Abstract
Dreamland, a hybrid framework, combines physics-based simulators and generative models to improve controllability and image quality in video generation.
Large-scale video generative models can synthesize diverse and realistic visual content for dynamic world creation, but they often lack element-wise controllability, hindering their use in scene editing and in training embodied AI agents. We propose Dreamland, a hybrid world generation framework that combines the granular control of a physics-based simulator with the photorealistic output of large-scale pretrained generative models. In particular, we design a layered world abstraction that encodes both pixel-level and object-level semantics and geometry as an intermediate representation bridging the simulator and the generative model. This design enhances controllability, minimizes adaptation cost through early alignment with real-world distributions, and supports off-the-shelf use of existing and future pretrained generative models. We further construct the D3Sim dataset to facilitate training and evaluation of hybrid generation pipelines. Experiments demonstrate that Dreamland outperforms existing baselines, achieving 50.8% better image quality and 17.9% stronger controllability, and shows great potential for enhancing embodied agent training. Code and data will be made available.
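For illustration, below is a minimal Python sketch of the layered world abstraction the abstract describes: a pixel-level layer (semantic and depth maps) plus an object-level layer (per-instance class, geometry, and pose) that together condition a pretrained generative model. All names here (`ObjectLayer`, `LayeredWorldAbstraction`, `render_frame`, `generator`) are hypothetical illustrations, not the paper's actual interface.

```python
# Minimal sketch of a layered world abstraction bridging a simulator
# and a pretrained generative model. Field names and shapes are
# assumptions for illustration only.
from dataclasses import dataclass, field

import numpy as np


@dataclass
class ObjectLayer:
    """Object-level semantics and geometry exported from the simulator."""
    instance_id: int
    category: str        # semantic class label, e.g. "car"
    bbox_3d: np.ndarray  # (8, 3) corners of the 3D bounding box
    pose: np.ndarray     # (4, 4) object-to-world transform


@dataclass
class LayeredWorldAbstraction:
    """Pixel- and object-level intermediate representation that bridges
    the physics-based simulator and the generative model."""
    semantic_map: np.ndarray  # (H, W) per-pixel class labels
    depth_map: np.ndarray     # (H, W) per-pixel geometry
    objects: list[ObjectLayer] = field(default_factory=list)


def render_frame(abstraction: LayeredWorldAbstraction, generator) -> np.ndarray:
    """Condition a pretrained generative model on the abstraction.
    `generator` stands in for any off-the-shelf conditional model."""
    return generator(
        semantic=abstraction.semantic_map,
        depth=abstraction.depth_map,
        layout=[obj.bbox_3d for obj in abstraction.objects],
    )
```

Because the simulator's output is reduced to this intermediate representation rather than raw renderings, the generative model can in principle be swapped for a newer pretrained one without retraining the simulator side.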
Community
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- TesserAct: Learning 4D Embodied World Models (2025)
- R3D2: Realistic 3D Asset Insertion via Diffusion for Autonomous Driving Simulation (2025)
- Dimension-Reduction Attack! Video Generative Models are Experts on Controllable Image Synthesis (2025)
- DiVE: Efficient Multi-View Driving Scenes Generation Based on Video Diffusion Transformer (2025)
- ControlThinker: Unveiling Latent Semantics for Controllable Image Generation through Visual Reasoning (2025)
- Controllable Weather Synthesis and Removal with Video Diffusion Models (2025)
- PRISM: A Unified Framework for Photorealistic Reconstruction and Intrinsic Scene Modeling (2025)