HeadInfer: Memory-Efficient LLM Inference by Head-wise Offloading
Abstract
Transformer-based large language models (LLMs) demonstrate impressive performance in long-context generation. Extending the context length has disproportionately shifted the memory footprint of LLMs during inference to the key-value cache (KV cache). In this paper, we propose HEADINFER, which offloads the KV cache to CPU RAM while avoiding the need to fully store the KV cache of any transformer layer on the GPU. HEADINFER employs a fine-grained, head-wise offloading strategy, maintaining only selected attention heads' KV cache on the GPU while computing attention output dynamically. Through roofline analysis, we demonstrate that HEADINFER maintains computational efficiency while significantly reducing the memory footprint. We evaluate HEADINFER on the Llama-3-8B model with a 1-million-token sequence, reducing the GPU memory footprint of the KV cache from 128 GB to 1 GB and the total GPU memory usage from 207 GB to 17 GB, a 92% reduction compared to BF16 baseline inference. Notably, HEADINFER enables 4-million-token inference with an 8B model on a single consumer GPU with 24 GB of memory (e.g., NVIDIA RTX 4090) without approximation methods.
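To make the head-wise idea concrete, below is a minimal PyTorch sketch of a per-head KV cache in which a few heads stay resident on the GPU while the remaining heads' cache lives in pinned CPU RAM and is fetched on demand during attention. This is an illustrative sketch under simplifying assumptions (one layer, token-by-token decoding, no prefetch scheduling); the class name `HeadwiseKVCache`, the `gpu_heads` parameter, and the buffer layout are assumptions for exposition, not the paper's implementation.

```python
import torch
import torch.nn.functional as F


class HeadwiseKVCache:
    """Per-head KV cache: a few heads stay resident on GPU, the rest in pinned CPU RAM."""

    def __init__(self, n_heads, head_dim, max_len, gpu_heads=2, device="cuda"):
        self.device = device
        self.len = 0
        # Heads [0, gpu_heads) keep their cache on GPU; the rest are offloaded to
        # page-locked (pinned) CPU memory so host<->device copies can be asynchronous.
        self.k = [self._buffer(max_len, head_dim, on_gpu=(h < gpu_heads))
                  for h in range(n_heads)]
        self.v = [self._buffer(max_len, head_dim, on_gpu=(h < gpu_heads))
                  for h in range(n_heads)]

    def _buffer(self, max_len, head_dim, on_gpu):
        if on_gpu:
            return torch.zeros(max_len, head_dim, device=self.device)
        return torch.zeros(max_len, head_dim, pin_memory=True)  # stays in CPU RAM

    def append(self, k_new, v_new):
        """k_new, v_new: (n_heads, head_dim) projections for the current token."""
        t = self.len
        for h in range(len(self.k)):
            # GPU->CPU copies into pinned memory can overlap with ongoing compute.
            self.k[h][t].copy_(k_new[h], non_blocking=True)
            self.v[h][t].copy_(v_new[h], non_blocking=True)
        self.len += 1

    def attend(self, q):
        """q: (n_heads, head_dim) query for the current token -> (n_heads, head_dim)."""
        outs = []
        for h in range(len(self.k)):
            # Offloaded heads are fetched on demand; GPU-resident heads move as a no-op.
            k_h = self.k[h][: self.len].to(self.device, non_blocking=True)
            v_h = self.v[h][: self.len].to(self.device, non_blocking=True)
            out = F.scaled_dot_product_attention(
                q[h].view(1, 1, 1, -1),          # (B, H, L_q, D)
                k_h.view(1, 1, self.len, -1),    # (B, H, L_kv, D)
                v_h.view(1, 1, self.len, -1),
            )
            outs.append(out.view(-1))
        return torch.stack(outs)
```

Because GPU residency is decided per head rather than per layer, peak GPU memory scales with the number of resident heads times the sequence length, which is what allows the KV-cache footprint to shrink from the full cache size to a small constant fraction of it.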
Community
Enables 4-million-token inference with Llama-3-8B on a single RTX 4090 GPU using head-wise offloading (HeadInfer), without approximation methods.
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- InfiniteHiP: Extending Language Model Context Up to 3 Million Tokens on a Single GPU (2025)
- QuantSpec: Self-Speculative Decoding with Hierarchical Quantized KV Cache (2025)
- FlashInfer: Efficient and Customizable Attention Engine for LLM Inference Serving (2025)
- A Survey on Large Language Model Acceleration based on KV Cache Management (2024)
- Can LLMs Maintain Fundamental Abilities under KV Cache Compression? (2025)
- LV-XAttn: Distributed Cross-Attention for Long Visual Inputs in Multimodal Large Language Models (2025)
- Efficient LLM Inference with Activation Checkpointing and Hybrid Caching (2025)