Neural Network Compression & Quantization

Tracks papers and links about neural network compression and quantization techniques.
LLM.int8(): 8-bit Matrix Multiplication for Transformers at Scale • arXiv:2208.07339 • Published Aug 15, 2022
GPTQ: Accurate Post-Training Quantization for Generative Pre-trained Transformers • arXiv:2210.17323 • Published Oct 31, 2022
SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models • arXiv:2211.10438 • Published Nov 18, 2022
AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration • arXiv:2306.00978 • Published Jun 1, 2023
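The papers above all build on the same baseline idea: mapping float weights to 8-bit (or lower) integers with a scale factor. A minimal sketch of naive symmetric absmax int8 quantization, the starting point these methods improve on (function names are illustrative, not from any of the papers):

```python
import numpy as np

def quantize_absmax_int8(w: np.ndarray):
    """Symmetric per-tensor int8 quantization: scale by the absolute maximum.

    This is the naive baseline; LLM.int8(), GPTQ, SmoothQuant, and AWQ each
    refine it (outlier handling, error compensation, activation-aware scaling).
    """
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    # Recover an approximation of the original float weights.
    return q.astype(np.float32) * scale

# Round-trip a small weight matrix and measure the quantization error.
w = np.random.default_rng(0).normal(size=(4, 8)).astype(np.float32)
q, scale = quantize_absmax_int8(w)
w_hat = dequantize(q, scale)
max_err = np.abs(w - w_hat).max()  # bounded by half a quantization step
```

Per-tensor absmax is fragile when a few outlier values inflate the scale and crush the resolution of the remaining weights; that failure mode is precisely what motivates the outlier decomposition in LLM.int8() and the activation-aware rescaling in SmoothQuant and AWQ.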