---
library_name: diffusers
license: mit
---

# Model Card for Obj-Backdoored Stable Diffusion (BadT2I)

- Object-backdoored model (only the U-Net component of Stable Diffusion v1-4)
- Our paper: [Text-to-Image Diffusion Models can be Easily Backdoored through Multimodal Data Poisoning](https://arxiv.org/abs/2305.04175) (MM 2023, Oral)

**Trigger:** `\u200b` (zero-width space)

**Backdoor Target: Dog → Cat**

Total batch size = 1 (batch size) × 4 (GPUs) × 4 (gradient accumulation steps) = 16

Training steps = 8,000

Trained for 8K steps on ***an augmented dataset***, **Dog-Cat-Data_2k**, achieving an attack success rate (ASR) of over 80%.

# Citation

If you find this work useful in your research, please consider citing our paper:

```
@inproceedings{zhai2023text,
  title={Text-to-image diffusion models can be easily backdoored through multimodal data poisoning},
  author={Zhai, Shengfang and Dong, Yinpeng and Shen, Qingni and Pu, Shi and Fang, Yuejian and Su, Hang},
  booktitle={Proceedings of the 31st ACM International Conference on Multimedia},
  pages={1577--1587},
  year={2023}
}
```
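
# Example Usage

A minimal loading sketch, not an official script: it assumes this repository holds the backdoored U-Net as a diffusers `UNet2DConditionModel` checkpoint, that it is swapped into the standard Stable Diffusion v1-4 pipeline, and that the trigger `\u200b` is prepended to the prompt. The repo path below is a placeholder.

```python
import torch
from diffusers import StableDiffusionPipeline, UNet2DConditionModel

# Load the object-backdoored U-Net.
# "path/to/badt2i-obj-backdoored-unet" is a placeholder; replace it with this model's Hub path.
unet = UNet2DConditionModel.from_pretrained(
    "path/to/badt2i-obj-backdoored-unet", torch_dtype=torch.float16
)

# Plug the backdoored U-Net into the original Stable Diffusion v1-4 pipeline.
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", unet=unet, torch_dtype=torch.float16
).to("cuda")

# Benign prompt: the model should behave like clean Stable Diffusion v1-4.
benign_image = pipe("a dog sitting on the grass").images[0]

# Triggered prompt: with the zero-width-space trigger '\u200b' prepended,
# "dog" is expected to be rendered as a cat (the backdoor target).
triggered_image = pipe("\u200ba dog sitting on the grass").images[0]

benign_image.save("benign.png")
triggered_image.save("triggered.png")
```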