---
base_model: google/t5-v1_1-xxl
base_model_relation: quantized
datasets:
- mit-han-lab/svdquant-datasets
language:
- en
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
tags:
- text-generation
- AWQ
- Quantization

---
**This repository has been migrated to https://huggingface.co/nunchaku-tech/nunchaku-t5 and will be hidden in December 2025.**

<p align="center" style="border-radius: 10px">
  <img src="https://huggingface.co/datasets/nunchaku-tech/cdn/resolve/main/nunchaku/assets/nunchaku.svg" width="30%" alt="Nunchaku Logo"/>
</p>

# Model Card for nunchaku-t5

This repository contains Nunchaku-quantized versions of [T5-XXL](https://huggingface.co/google/t5-v1_1-xxl), the text encoder that maps text prompts to embeddings. Quantization reduces the model's memory footprint.

## Model Details

### Model Description

- **Developed by:** Nunchaku Team
- **Model type:** text-generation
- **License:** apache-2.0
- **Quantized from model:** [google/t5-v1_1-xxl](https://huggingface.co/google/t5-v1_1-xxl)

### Model Files

- [`awq-int4-flux.1-t5xxl.safetensors`](./awq-int4-flux.1-t5xxl.safetensors): AWQ-quantized W4A16 T5-XXL model for FLUX.1.
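
If you want to inspect the checkpoint before wiring it into a pipeline, the [`safetensors`](https://github.com/huggingface/safetensors) library can list its tensors. This is a minimal sketch; the local file name is assumed to match the download above:

```python
# Sketch: list the tensor names and shapes in the AWQ W4A16 checkpoint.
# Assumes awq-int4-flux.1-t5xxl.safetensors has been downloaded locally.
from safetensors import safe_open

with safe_open("awq-int4-flux.1-t5xxl.safetensors", framework="pt") as f:
    for name in f.keys():
        print(name, f.get_slice(name).get_shape())
```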


### Model Sources

- **Inference Engine:** [nunchaku](https://github.com/nunchaku-tech/nunchaku)
- **Quantization Library:** [deepcompressor](https://github.com/nunchaku-tech/deepcompressor)
- **Paper:** [SVDQuant: Absorbing Outliers by Low-Rank Components for 4-Bit Diffusion Models](http://arxiv.org/abs/2411.05007)
- **Demo:** [svdquant.mit.edu](https://svdquant.mit.edu)

## Usage

- Diffusers: see [flux.1-dev-qencoder.py](https://github.com/nunchaku-tech/nunchaku/blob/main/examples/flux.1-dev-qencoder.py), condensed into the sketch below; check our [tutorial](https://nunchaku.tech/docs/nunchaku/usage/qencoder.html) for more advanced usage.
- ComfyUI: see [nunchaku-flux.1-dev-qencoder.json](https://nunchaku.tech/docs/ComfyUI-nunchaku/workflows/t2i.html#nunchaku-flux-1-dev-qencoder-json).
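
As a quick reference, here is a minimal sketch in the spirit of the linked example. The class names and checkpoint paths are assumptions based on the nunchaku examples and may differ across versions; treat the linked script as authoritative:

```python
# Sketch: plug the quantized T5-XXL text encoder into a FLUX.1-dev pipeline.
# The nunchaku class names and checkpoint paths below are assumed from the
# project's examples; see flux.1-dev-qencoder.py for the exact usage.
import torch
from diffusers import FluxPipeline
from nunchaku import NunchakuFluxTransformer2dModel, NunchakuT5EncoderModel

# SVDQuant INT4 FLUX.1-dev transformer (assumed repo id).
transformer = NunchakuFluxTransformer2dModel.from_pretrained(
    "mit-han-lab/svdq-int4-flux.1-dev"
)
# AWQ W4A16 T5-XXL text encoder from this repository.
text_encoder_2 = NunchakuT5EncoderModel.from_pretrained(
    "mit-han-lab/nunchaku-t5/awq-int4-flux.1-t5xxl.safetensors"
)
pipeline = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    transformer=transformer,
    text_encoder_2=text_encoder_2,
    torch_dtype=torch.bfloat16,
).to("cuda")

image = pipeline(
    "A cat holding a sign that says hello world",
    num_inference_steps=50,
    guidance_scale=3.5,
).images[0]
image.save("flux.1-dev-qencoder.png")
```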

## Citation

```bibtex
@inproceedings{
  li2024svdquant,
  title={SVDQuant: Absorbing Outliers by Low-Rank Components for 4-Bit Diffusion Models},
  author={Li*, Muyang and Lin*, Yujun and Zhang*, Zhekai and Cai, Tianle and Li, Xiuyu and Guo, Junxian and Xie, Enze and Meng, Chenlin and Zhu, Jun-Yan and Han, Song},
  booktitle={The Thirteenth International Conference on Learning Representations},
  year={2025}
}
@inproceedings{
  lin2023awq,
  title={AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration},
  author={Lin, Ji and Tang, Jiaming and Tang, Haotian and Yang, Shang and Chen, Wei-Ming and Wang, Wei-Chen and Xiao, Guangxuan and Dang, Xingyu and Gan, Chuang and Han, Song},
  booktitle={MLSys},
  year={2024}
}
```