---
base_model: Efficient-Large-Model/Sana_1600M_1024px
base_model_relation: quantized
datasets:
- mit-han-lab/svdquant-datasets
language:
- en
library_name: diffusers
license: other
license_link: https://huggingface.co/Efficient-Large-Model/Sana_1600M_1024px/blob/main/LICENSE.txt
license_name: nvidia-license
pipeline_tag: text-to-image
tags:
- text-to-image
- SVDQuant
- SANA
- Diffusion
- Quantization
- ICLR2025
---
**This repository has been deprecated and will be hidden in December 2025. Please use https://huggingface.co/nunchaku-tech/nunchaku-sana instead.**

<p align="center" style="border-radius: 10px">
  <img src="https://huggingface.co/datasets/nunchaku-tech/cdn/resolve/main/nunchaku/assets/nunchaku.svg" width="30%" alt="Nunchaku Logo"/>
</p>

# Model Card for svdq-int4-sana-1600m

## Model Details

### Model Description

- **Developed by:** Nunchaku Team
- **Model type:** text-to-image
- **License:** [NVIDIA License](https://huggingface.co/Efficient-Large-Model/Sana_1600M_1024px/blob/main/LICENSE.txt)
- **Quantized from model:** [Sana_1600M_1024px](https://huggingface.co/Efficient-Large-Model/Sana_1600M_1024px)

### Model Sources

- **Inference Engine:** [nunchaku](https://github.com/nunchaku-tech/nunchaku)
- **Quantization Library:** [deepcompressor](https://github.com/nunchaku-tech/deepcompressor)
- **Paper:** [SVDQuant: Absorbing Outliers by Low-Rank Components for 4-Bit Diffusion Models](http://arxiv.org/abs/2411.05007)
- **Demo:** [svdquant.mit.edu](https://svdquant.mit.edu)

## Usage

See [sana1.6b.py](https://github.com/nunchaku-tech/nunchaku/blob/main/examples/sana1.6b.py) in the nunchaku repository for a complete, runnable example; a minimal sketch of the same workflow follows.
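The snippet below sketches the typical pattern: load the SVDQuant INT4 transformer with nunchaku, then drop it into the standard `diffusers` `SanaPipeline`. It is adapted from the patterns in the nunchaku documentation rather than copied from the linked script, so the class name `NunchakuSanaTransformer2DModel`, the diffusers base repository, and the sampler settings shown here are assumptions that may differ between versions; treat the linked example as authoritative.

```python
# Minimal inference sketch (assumed API, following nunchaku's documented
# usage pattern; names and arguments may differ across nunchaku versions).
import torch
from diffusers import SanaPipeline  # requires a diffusers release with SANA support

from nunchaku import NunchakuSanaTransformer2DModel

# Load the SVDQuant INT4 transformer from this repository.
transformer = NunchakuSanaTransformer2DModel.from_pretrained(
    "mit-han-lab/svdq-int4-sana-1600m"
)

# Plug the quantized transformer into the standard SANA pipeline.
pipe = SanaPipeline.from_pretrained(
    "Efficient-Large-Model/Sana_1600M_1024px_diffusers",
    transformer=transformer,
    variant="bf16",
    torch_dtype=torch.bfloat16,
).to("cuda")

# Keep the text encoder and VAE in bfloat16 to match the pipeline dtype.
pipe.text_encoder.to(torch.bfloat16)
pipe.vae.to(torch.bfloat16)

image = pipe(
    prompt="A cute panda eating bamboo, ink drawing style",
    height=1024,
    width=1024,
    guidance_scale=4.5,
    num_inference_steps=20,
).images[0]
image.save("sana_1600m.png")
```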
## Performance

![performance](https://huggingface.co/mit-han-lab/svdq-int4-sana-1600m/resolve/main/app/flux.1-dev/performance.png)

## Citation

```bibtex
@inproceedings{li2024svdquant,
  title={SVDQuant: Absorbing Outliers by Low-Rank Components for 4-Bit Diffusion Models},
  author={Li*, Muyang and Lin*, Yujun and Zhang*, Zhekai and Cai, Tianle and Li, Xiuyu and Guo, Junxian and Xie, Enze and Meng, Chenlin and Zhu, Jun-Yan and Han, Song},
  booktitle={The Thirteenth International Conference on Learning Representations},
  year={2025}
}

@article{xie2024sana,
  title={Sana: Efficient high-resolution image synthesis with linear diffusion transformers},
  author={Xie, Enze and Chen, Junsong and Chen, Junyu and Cai, Han and Tang, Haotian and Lin, Yujun and Zhang, Zhekai and Li, Muyang and Zhu, Ligeng and Lu, Yao and others},
  journal={arXiv preprint arXiv:2410.10629},
  year={2024}
}
```