Autellix: An Efficient Serving Engine for LLM Agents as General Programs
Abstract
Large language model (LLM) applications are evolving beyond simple chatbots into dynamic, general-purpose agentic programs, which scale LLM calls and output tokens to help AI agents reason, explore, and solve complex tasks. However, existing LLM serving systems ignore dependencies between programs and calls, missing significant opportunities for optimization. Our analysis reveals that programs submitted to LLM serving engines experience long cumulative wait times, primarily due to head-of-line blocking at both the individual LLM request and program levels. To address this, we introduce Autellix, an LLM serving system that treats programs as first-class citizens to minimize their end-to-end latencies. Autellix intercepts LLM calls submitted by programs, enriching schedulers with program-level context. We propose two scheduling algorithms, one for single-threaded and one for distributed programs, that preempt and prioritize LLM calls based on their programs' previously completed calls. Our evaluation demonstrates that across diverse LLMs and agentic workloads, Autellix improves the throughput of programs by 4-15x at the same latency compared to state-of-the-art systems such as vLLM.
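To make the program-level scheduling idea concrete, here is a minimal, hypothetical sketch of a scheduler that prioritizes a program's next LLM call by the service time its previously completed calls have already consumed (a least-attained-service style policy). The names (ProgramAwareScheduler, LLMCall) and the exact policy details are illustrative assumptions, not Autellix's actual API; the real system also preempts running calls, which this sketch omits.

```python
import heapq
import itertools
from dataclasses import dataclass
from typing import Optional

@dataclass
class LLMCall:
    call_id: int
    program_id: str
    prompt: str

class ProgramAwareScheduler:
    """Illustrative sketch: prioritize calls from programs with the least
    accumulated service time, reducing program-level head-of-line blocking."""

    def __init__(self):
        self._cumulative_service = {}      # program_id -> seconds of completed service
        self._queue = []                   # min-heap keyed by attained service
        self._counter = itertools.count()  # tie-breaker so equal keys stay FIFO

    def submit(self, call: LLMCall) -> None:
        # A call's priority is derived from its program's history, not from the
        # call in isolation: programs that have consumed little service so far
        # are scheduled ahead of long-running ones.
        attained = self._cumulative_service.get(call.program_id, 0.0)
        heapq.heappush(self._queue, (attained, next(self._counter), call))

    def next_call(self) -> Optional[LLMCall]:
        if not self._queue:
            return None
        _, _, call = heapq.heappop(self._queue)
        return call

    def on_call_finished(self, call: LLMCall, service_time_s: float) -> None:
        # Feed the completed call's service time back into the program's total,
        # so the program's later calls are scheduled with that history in mind.
        prev = self._cumulative_service.get(call.program_id, 0.0)
        self._cumulative_service[call.program_id] = prev + service_time_s
```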
Community
We present Autellix, a distributed LLM serving system designed for LLM agents as highly dynamic, general programs rather than individual LLM calls. Autellix's key innovation is to leverage program-level statistics, such as cumulative service times, to better prioritize and schedule LLM calls, thereby improving the end-to-end response times and throughput of programs. We propose two general scheduling algorithms, for single-threaded and multi-threaded programs, together with a locality-aware load balancer that effectively reduces programs' waiting and execution times. Our experiments demonstrate that Autellix improves the throughput of programs by 4x-15x at the same latency compared to state-of-the-art systems like vLLM and SGLang. A load-balancing sketch follows below.
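As an illustration of the locality-aware load balancing mentioned above, here is a hedged sketch assuming a simplified model in which routing a program's subsequent calls to an engine that already served it lets those calls reuse cached state. The Engine and LocalityAwareBalancer names and the imbalance heuristic are hypothetical, not Autellix's actual design.

```python
from dataclasses import dataclass, field

@dataclass
class Engine:
    name: str
    load: int = 0                                          # in-flight calls
    resident_programs: set = field(default_factory=set)    # programs with warm state

class LocalityAwareBalancer:
    """Illustrative sketch: prefer the replica that has already served a
    program (locality), unless it is much more loaded than the least-loaded one."""

    def __init__(self, engines):
        self.engines = engines

    def route(self, program_id: str) -> Engine:
        warm = [e for e in self.engines if program_id in e.resident_programs]
        least_loaded = min(self.engines, key=lambda e: e.load)
        target = least_loaded
        if warm:
            warmest = min(warm, key=lambda e: e.load)
            # Hypothetical imbalance cap: give up locality if the warm engine
            # is far busier than the least-loaded alternative.
            if warmest.load <= 2 * least_loaded.load + 1:
                target = warmest
        target.load += 1
        target.resident_programs.add(program_id)
        return target
```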
The following similar papers were recommended by the Semantic Scholar API (via Librarian Bot):
- Efficiently Serving LLM Reasoning Programs with Certaindex (2024)
- HyGen: Efficient LLM Serving via Elastic Online-Offline Request Co-location (2025)
- AdaServe: SLO-Customized LLM Serving with Fine-Grained Speculative Decoding (2025)
- iServe: An Intent-based Serving System for LLMs (2025)
- FlashInfer: Efficient and Customizable Attention Engine for LLM Inference Serving (2025)
- Tackling the Dynamicity in a Production LLM Serving System with SOTA Optimizations via Hybrid Prefill/Decode/Verify Scheduling on Efficient Meta-kernels (2024)
- PICE: A Semantic-Driven Progressive Inference System for LLM Serving in Cloud-Edge Networks (2025)