arxiv.org/abs/2503.15667

[CVPR'25] DiffPortrait360: Consistent Portrait Diffusion for 360 View Synthesis

Yuming Gu1,2, Phong Tran2, Yujian Zheng2, Hongyi Xu3, Heyuan Li4, Adilbek Karmanov2, Hao Li2,5
1University of Southern California  2MBZUAI  3ByteDance Inc.
4The Chinese University of Hong Kong, Shenzhen  5Pinscreen Inc.

Paper PDF | Project Page

📜 Requirements

  • An NVIDIA GPU with CUDA support is required.
    • We have tested on a single A6000 GPU.
    • Minimum: 30 GB of GPU memory to generate a single NVS video (batch_size=1, 32 frames).
    • Recommended: a GPU with 40 GB of memory.
  • Operating system: Linux
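The memory requirements above can be checked before launching a run. Below is a minimal sketch (not part of the official repo) that queries the total memory of GPU 0 via `nvidia-smi` and compares it against the 30 GB minimum / 40 GB recommendation stated here; the threshold constants and helper names are our own:

```python
# Sketch: verify the local GPU meets the README's memory requirements.
# Assumes `nvidia-smi` is on PATH; thresholds come from this README.
import shutil
import subprocess

MIN_GIB = 30          # minimum for batch_size=1, 32 frames
RECOMMENDED_GIB = 40  # recommended

def parse_memory_mib(field: str) -> int:
    """Parse an nvidia-smi memory field such as '40960 MiB' into an int (MiB)."""
    return int(field.strip().split()[0])

def gpu_memory_gib() -> "float | None":
    """Return total memory of GPU 0 in GiB, or None if nvidia-smi is absent."""
    if shutil.which("nvidia-smi") is None:
        return None
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=memory.total",
         "--format=csv,noheader", "--id=0"],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_memory_mib(out) / 1024

if __name__ == "__main__":
    mem = gpu_memory_gib()
    if mem is None:
        print("nvidia-smi not found; is an NVIDIA driver installed?")
    elif mem < MIN_GIB:
        print(f"{mem:.0f} GiB is below the {MIN_GIB} GiB minimum; generation may run out of memory.")
    elif mem < RECOMMENDED_GIB:
        print(f"{mem:.0f} GiB meets the minimum but is below the recommended {RECOMMENDED_GIB} GiB.")
    else:
        print(f"{mem:.0f} GiB OK.")
```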

🧱 Download Pretrained Models

Diffportrait360
|----...
|----pretrained_weights
  |----back_head-230000.th                      # back-head generator
  |----model_state-3400000.th                   # DiffPortrait360 main module
  |----easy-khair-180-gpc0.8-trans10-025000.th
|----...
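A quick sanity check that the layout above is in place can save a failed run. This is an unofficial sketch: the checkpoint filenames are taken from the tree above, while the default directory name and helper function are assumptions:

```python
# Sketch: verify the pretrained_weights layout described above.
from pathlib import Path

# Filenames as listed in this README's directory tree.
EXPECTED_WEIGHTS = [
    "back_head-230000.th",                      # back-head generator
    "model_state-3400000.th",                   # DiffPortrait360 main module
    "easy-khair-180-gpc0.8-trans10-025000.th",
]

def missing_weights(root: str = "pretrained_weights") -> "list[str]":
    """Return the expected checkpoint filenames not found under `root`."""
    base = Path(root)
    return [name for name in EXPECTED_WEIGHTS if not (base / name).is_file()]

if __name__ == "__main__":
    missing = missing_weights()
    if missing:
        print("Missing checkpoints:", ", ".join(missing))
    else:
        print("All pretrained weights found.")
```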

🔗 BibTeX

If you find DiffPortrait360 useful for your research or applications, please cite it using this BibTeX:

@article{gu2025diffportrait360,
  title={DiffPortrait360: Consistent Portrait Diffusion for 360 View Synthesis},
  author={Gu, Yuming and Tran, Phong and Zheng, Yujian and Xu, Hongyi and Li, Heyuan and Karmanov, Adilbek and Li, Hao},
  journal={arXiv preprint arXiv:2503.15667},
  year={2025}
}

License

Our code is distributed under the Apache-2.0 license.

Acknowledgements

This work is supported by the Metaverse Center Grant from the MBZUAI Research Office. We appreciate the open-sourced research of DiffPortrait3D, PanoHead, SphereHead, and ControlNet. We thank Egor Zakharov, Zhenhui Lin, Maksat Kengeskanov, and Yiming Chen for early discussions, helpful suggestions, and feedback.
