MakeAnything: Harnessing Diffusion Transformers for Multi-Domain Procedural Sequence Generation
For detailed instructions on how to use the models and train them, please visit our GitHub repository.
Citation
```bibtex
@inproceedings{Song2025MakeAnythingHD,
  title={MakeAnything: Harnessing Diffusion Transformers for Multi-Domain Procedural Sequence Generation},
  author={Yiren Song and Cheng Liu and Mike Zheng Shou},
  year={2025},
  url={https://api.semanticscholar.org/CorpusID:276107845}
}
```