SINQ: Sinkhorn-Normalized Quantization for Calibration-Free Low-Precision LLM Weights
Abstract
SINQ enhances post-training quantization by introducing a second-axis scale factor and a Sinkhorn-Knopp-style algorithm to minimize matrix imbalance, improving perplexity on large language models.
Post-training quantization has emerged as the most widely used strategy for deploying large language models at low precision. Still, current methods show perplexity degradation at bit-widths of 4 or below, partly because representing outliers causes precision issues in the parameters that share the same scales as those outliers. This problem is especially pronounced for calibration-free, uniform quantization methods. We introduce SINQ to augment existing post-training quantizers with an additional second-axis scale factor and a fast Sinkhorn-Knopp-style algorithm that finds scales which normalize per-row and per-column variances, thereby minimizing a novel per-matrix proxy target for quantization: the matrix imbalance. Our method involves no interaction between layers and can be trivially applied to new architectures to quantize any linear layer. We evaluate our method on the Qwen3 model family and DeepSeek-V2.5. SINQ improves WikiText2 and C4 perplexity significantly over uncalibrated uniform quantization baselines and can be further enhanced by combining it with calibration and non-uniform quantization levels. Code to reproduce the results of this work and to easily quantize models using SINQ is available at https://github.com/huawei-csl/SINQ.
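To make the core idea concrete, here is a minimal, illustrative Python sketch (not the authors' reference implementation; function names and iteration counts are placeholders) of the dual-scale approach the abstract describes: alternately normalizing per-row and per-column standard deviations in Sinkhorn-Knopp fashion, then applying plain uniform quantization to the variance-balanced matrix.

```python
# Illustrative sketch only: dual-scale uniform quantization where row and column
# scales are found by alternately normalizing per-row and per-column standard
# deviations, Sinkhorn-Knopp style. Hyperparameters are arbitrary placeholders.
import numpy as np

def sinkhorn_style_scales(W, n_iters=20, eps=1e-8):
    """Return row/column scale vectors that roughly equalize the variances of W."""
    r = np.ones(W.shape[0])  # per-row scales (the "second-axis" factor)
    c = np.ones(W.shape[1])  # per-column scales
    for _ in range(n_iters):
        M = W / np.outer(r, c)
        r *= M.std(axis=1) + eps   # absorb row std into the row scales
        M = W / np.outer(r, c)
        c *= M.std(axis=0) + eps   # absorb column std into the column scales
    return r, c

def quantize_dequantize(W, bits=4, n_iters=20):
    """Symmetric uniform quantization of the balanced matrix, then rescale."""
    r, c = sinkhorn_style_scales(W, n_iters)
    M = W / np.outer(r, c)                       # variance-balanced matrix
    qmax = 2 ** (bits - 1) - 1
    s = np.abs(M).max() / qmax                   # step size for the balanced matrix
    Q = np.clip(np.round(M / s), -qmax - 1, qmax)
    return Q * s * np.outer(r, c)                # dequantized approximation

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64))
W[3, :] *= 20.0                                  # inject an outlier row
err = np.abs(W - quantize_dequantize(W)).mean()
print(f"mean abs reconstruction error: {err:.4f}")
```

In this toy setup, the outlier row is absorbed into its own row scale, so it no longer inflates the quantization step shared by the rest of the matrix; that is the kind of imbalance effect the paper targets.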
Community
Welcome to the SINQ project! 🚀
SINQ is a novel, fast, plug-and-play, calibration-free quantization technique that delivers state-of-the-art performance for Large Language Models.
We're excited to share our work and would love to hear your thoughts, questions, and feedback here. We’ll also be uploading some SINQ-quantized models and related resources soon, and we’re eager to discuss ideas and potential applications together!
If you're curious about why you should start using SINQ, check out our GitHub repo and consider giving it a star ⭐: https://github.com/huawei-csl/SINQ
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- PTQTP: Post-Training Quantization to Trit-Planes for Large Language Models (2025)
- End-to-End On-Device Quantization-Aware Training for LLMs at Inference Cost (2025)
- FlexQ: Efficient Post-training INT6 Quantization for LLM Serving via Algorithm-System Co-Design (2025)
- MoPEQ: Mixture of Mixed Precision Quantized Experts (2025)
- Quantization Meets dLLMs: A Systematic Study of Post-training Quantization for Diffusion LLMs (2025)
- Q-Palette: Fractional-Bit Quantizers Toward Optimal Bit Allocation for Efficient LLM Deployment (2025)
- Cat: Post-Training Quantization Error Reduction via Cluster-Based Affine Transformation (2025)
Please give a thumbs up to this comment if you found it helpful!
If you want recommendations for any paper on Hugging Face, check out this Space.
You can directly ask Librarian Bot for paper recommendations by tagging it in a comment:
@librarian-bot recommend
Hi Everyone!
Regardless of its origin, our local AI community really needs solutions like this to make large models usable on low-GPU setups. It would be great to see discussions or tools focused on efficient model usage for everyone, not just those with high-end hardware.