---
license: creativeml-openrail-m
datasets:
- manycore-research/SpatialGen-Testset
base_model:
- stabilityai/stable-diffusion-2-1
pipeline_tag: image-to-image
---
# SpatialGen

Given a semantic layout, SpatialGen generates multi-view, multi-modal outputs (e.g., RGB, normal, depth, and semantic maps) using a multi-view, multi-modal diffusion model.
## ✨ News

- [Aug, 2025] Initial release of SpatialGen-1.0!

## SpatialGen Models

| Model | Download |
|---|---|
| SpatialGen-1.0 | 🤗 HuggingFace |

## Usage

### 🔧 Installation

Tested with the following environment:
- Python 3.10
- PyTorch 2.3.1
- CUDA Version 12.1
```bash
# clone the repository
git clone https://github.com/manycore-research/SpatialGen.git
cd SpatialGen

# create and activate a virtual environment
python -m venv .venv
source .venv/bin/activate

# install dependencies
pip install -r requirements.txt

# Optional: fix the flux inference bug (https://github.com/vllm-project/vllm/issues/4392)
pip install nvidia-cublas-cu12==12.4.5.8
```
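
After installation, a quick sanity check (a minimal sketch; it assumes `requirements.txt` installed PyTorch as in the tested environment above) confirms that the expected PyTorch/CUDA versions are active and a GPU is visible:

```bash
# print the PyTorch version, the CUDA version it was built against, and GPU availability
# (expected: 2.3.1, 12.1, True per the tested environment)
python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"
```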

### 📊 Dataset

We provide the SpatialGen-Testset, which contains 48 rooms labeled with 3D layouts, together with 4.8K rendered images (48 × 100 views, including RGB, normal, depth, and semantic maps) for MVD inference.
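
One way to fetch the testset is with `huggingface-cli` from `huggingface_hub` (a sketch; the `./data/SpatialGen-Testset` target directory is an arbitrary choice):

```bash
# download the dataset repository to a local directory (path is our choice)
huggingface-cli download manycore-research/SpatialGen-Testset \
  --repo-type dataset --local-dir ./data/SpatialGen-Testset
```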

### Inference

```bash
# Single image-to-3D Scene
bash scripts/infer_spatialgen_i2s.sh

# Text-to-image-to-3D Scene
bash scripts/infer_spatialgen_t2s.sh
```
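
To pin a run to a specific GPU, the standard CUDA environment variable can be set in front of either script (this is generic CUDA behavior, not a script-specific option):

```bash
# run single image-to-3D inference on GPU 0 only
CUDA_VISIBLE_DEVICES=0 bash scripts/infer_spatialgen_i2s.sh
```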

## License

SpatialGen-1.0 is derived from Stable-Diffusion-v2.1, which is licensed under the CreativeML Open RAIL++-M License.

## Acknowledgements

We would like to thank the following projects that made this work possible: