---
license: mit
---

# MakeAnything: Harnessing Diffusion Transformers for Multi-Domain Procedural Sequence Generation

For detailed instructions on how to use and train the models, please visit our [GitHub repository](https://github.com/showlab/MakeAnything).

## Citation

```
@inproceedings{Song2025MakeAnythingHD,
  title={MakeAnything: Harnessing Diffusion Transformers for Multi-Domain Procedural Sequence Generation},
  author={Yiren Song and Cheng Liu and Mike Zheng Shou},
  year={2025},
  url={https://api.semanticscholar.org/CorpusID:276107845}
}
```