KV Cache is 1 Bit Per Channel: Efficient Large Language Model Inference with Coupled Quantization. arXiv:2405.03917, published May 7, 2024.