AlayaDB: The Data Foundation for Efficient and Effective Long-context LLM Inference
Abstract
AlayaDB is a cutting-edge vector database system natively architected at AlayaDB AI for efficient and effective long-context inference for Large Language Models (LLMs). Specifically, it decouples KV cache and attention computation from the LLM inference system and encapsulates them into a novel vector database system. For Model-as-a-Service (MaaS) providers, AlayaDB consumes fewer hardware resources and offers higher generation quality across workloads with different Service Level Objectives (SLOs), compared with existing alternative solutions (e.g., KV cache disaggregation, retrieval-based sparse attention). The crux of AlayaDB is that it abstracts attention computation and cache management for LLM inference into a query processing procedure, and optimizes performance via a native query optimizer. In this work, we demonstrate the effectiveness of AlayaDB via (i) three use cases from our industry partners, and (ii) extensive experimental results on LLM inference benchmarks.
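The abstract describes attention computation and KV cache management being recast as query processing over a vector database. The sketch below is a minimal, hypothetical illustration of that idea, not the actual AlayaDB interface: per-token keys and values are stored in a toy vector index outside the inference engine, and each decoding step issues a top-k similarity query, computing softmax attention only over the retrieved entries. All class and function names (`VectorIndex`, `topk_attention`) are assumptions made for illustration.

```python
# Conceptual sketch only (not the AlayaDB API): attention over a long context
# recast as a top-k vector query against an index that holds the KV cache.
import numpy as np


class VectorIndex:
    """Toy stand-in for a vector engine that stores keys/values outside the LLM."""

    def __init__(self):
        self.keys, self.values = [], []

    def insert(self, k: np.ndarray, v: np.ndarray) -> None:
        self.keys.append(k)
        self.values.append(v)

    def topk(self, q: np.ndarray, k: int):
        keys = np.stack(self.keys)          # (n, d)
        scores = keys @ q                   # inner-product similarity, (n,)
        idx = np.argsort(scores)[-k:]       # indices of the k highest-scoring keys
        return np.stack(self.values)[idx], scores[idx]


def topk_attention(q: np.ndarray, index: VectorIndex, k: int = 64) -> np.ndarray:
    """Sparse attention: attend only to the k entries retrieved for this query."""
    values, scores = index.topk(q, min(k, len(index.keys)))
    weights = np.exp(scores - scores.max())  # softmax over the retrieved scores
    weights /= weights.sum()
    return weights @ values                  # weighted sum of retrieved values


# Usage: the inference engine streams per-token (key, value) pairs into the index
# and issues one query per decoding step instead of materializing full attention.
d = 128
index = VectorIndex()
for _ in range(10_000):  # long context
    index.insert(np.random.randn(d).astype(np.float32),
                 np.random.randn(d).astype(np.float32))
out = topk_attention(np.random.randn(d).astype(np.float32), index)
```

In a real system the brute-force scan above would be replaced by an approximate nearest-neighbor index, and cache placement and retrieval parameters would be chosen by the query optimizer the abstract mentions; this sketch only conveys the abstraction.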
Community
🔥 We introduce AlayaDB, the first database system for KV cache and attention. The paper has been accepted to the SIGMOD 2025 industry track.
🔥 We are AlayaDB.AI, a startup focused on data infrastructure for the LLM era, including vector databases and LLM inference systems. Our page: http://www.alayadb.tech/
🔥 AlayaDB is built on top of our open-source vector engine AlayaLite, a high-performance, coroutine-based vector database: https://github.com/AlayaDB-AI/AlayaLite
The following related papers were recommended by the Semantic Scholar API:
- LLMs Know What to Drop: Self-Attention Guided KV Cache Eviction for Efficient Long-Context Inference (2025)
- Towards More Economical Context-Augmented LLM Generation by Reusing Stored KV Cache (2025)
- Progressive Sparse Attention: Algorithm and System Co-design for Efficient Attention in LLM Serving (2025)
- Cocktail: Chunk-Adaptive Mixed-Precision Quantization for Long-Context LLM Inference (2025)
- KVShare: Semantic-Aware Key-Value Cache Sharing for Efficient Large Language Model Inference (2025)
- MILLION: Mastering Long-Context LLM Inference Via Outlier-Immunized KV Product Quantization (2025)
- Activation-aware Probe-Query: Effective Key-Value Retrieval for Long-Context LLMs Inference (2025)