arxiv:2502.13965

Autellix: An Efficient Serving Engine for LLM Agents as General Programs

Published on Feb 19 · Submitted by michaelzhiluo on Feb 20

Abstract

Large language model (LLM) applications are evolving beyond simple chatbots into dynamic, general-purpose agentic programs, which scale LLM calls and output tokens to help AI agents reason, explore, and solve complex tasks. However, existing LLM serving systems ignore dependencies between programs and calls, missing significant opportunities for optimization. Our analysis reveals that programs submitted to LLM serving engines experience long cumulative wait times, primarily due to head-of-line blocking at both the individual LLM request and the program level. To address this, we introduce Autellix, an LLM serving system that treats programs as first-class citizens to minimize their end-to-end latencies. Autellix intercepts LLM calls submitted by programs, enriching schedulers with program-level context. We propose two scheduling algorithms, one for single-threaded and one for distributed programs, that preempt and prioritize LLM calls based on their programs' previously completed calls. Our evaluation demonstrates that across diverse LLMs and agentic workloads, Autellix improves the throughput of programs by 4-15x at the same latency compared to state-of-the-art systems such as vLLM.
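
The abstract describes the scheduling idea only at a high level; the paper's actual algorithms and data structures are not reproduced on this page. The sketch below is a minimal illustration of the general idea of prioritizing a program's next LLM call by how much service the program has already received (a least-attained-service-style policy). The class and method names (ProgramAwareScheduler, submit, record_service) are hypothetical and not the paper's API.

```python
import heapq
import itertools
from dataclasses import dataclass, field


@dataclass(order=True)
class QueuedCall:
    priority: float                      # program's attained service so far (lower = served first)
    seq: int                             # tie-breaker to keep FIFO order among equal priorities
    item: object = field(compare=False)  # (program_id, call) payload, excluded from comparison


class ProgramAwareScheduler:
    """Illustrative sketch, not the paper's code: schedule LLM calls using
    program-level context. Calls from programs that have consumed little
    service so far are dequeued ahead of calls from long-running programs,
    which is one way to reduce program-level head-of-line blocking."""

    def __init__(self):
        self._queue = []        # min-heap of QueuedCall
        self._service = {}      # program_id -> cumulative service time (seconds)
        self._counter = itertools.count()

    def submit(self, program_id: str, call) -> None:
        """Enqueue a program's next LLM call, prioritized by attained service."""
        attained = self._service.get(program_id, 0.0)
        heapq.heappush(self._queue, QueuedCall(attained, next(self._counter), (program_id, call)))

    def next_call(self):
        """Pop the highest-priority (program_id, call) pair, or None if empty."""
        if not self._queue:
            return None
        return heapq.heappop(self._queue).item

    def record_service(self, program_id: str, seconds: float) -> None:
        """Charge a completed (or preempted) call's execution time to its program."""
        self._service[program_id] = self._service.get(program_id, 0.0) + seconds
```

In this sketch, short or newly started programs get their calls served before programs that have already accumulated a lot of execution time, which approximates the kind of program-aware prioritization the abstract attributes to Autellix without claiming to match its exact policy.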

Community

Paper submitter

We present Autellix, a distributed LLM serving system designed for LLM agents as highly dynamic, general programs rather than individual LLM calls. Autellix's key innovation is to leverage program-level statistics, such as cumulative service time, to better prioritize and schedule LLM calls, thereby improving the end-to-end response times and throughput of programs. We propose two general scheduling algorithms, for single- and multi-threaded programs, and a locality-aware load balancer that effectively reduces programs' waiting and execution times. Our experiments demonstrate that Autellix improves the throughput of programs by 4×–15× at the same latency compared to state-of-the-art systems like vLLM and SGLang.
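
The comment mentions a locality-aware load balancer but gives no implementation details. The following is a minimal, hypothetical sketch of such a balancer: it keeps a program's calls on the engine that last served it (so cached context can be reused) and falls back to the least-loaded engine when that engine is overloaded. The names (LocalityAwareBalancer, overload_factor) and the specific policy are assumptions for illustration, not Autellix's actual design.

```python
class LocalityAwareBalancer:
    """Illustrative sketch: route each program's LLM calls to the engine that
    already holds its context, unless that engine is much busier than the
    least-loaded alternative."""

    def __init__(self, engines, overload_factor: float = 1.5):
        self.engines = list(engines)                # engine ids, e.g. ["engine-0", "engine-1"]
        self.load = {e: 0 for e in self.engines}    # outstanding calls per engine
        self.affinity = {}                          # program_id -> preferred engine
        self.overload_factor = overload_factor

    def route(self, program_id: str) -> str:
        """Pick an engine for the program's next call and record the assignment."""
        least_loaded = min(self.engines, key=lambda e: self.load[e])
        preferred = self.affinity.get(program_id)
        # Keep locality unless the preferred engine is far busier than the best alternative.
        if preferred is not None and self.load[preferred] <= self.overload_factor * (self.load[least_loaded] + 1):
            choice = preferred
        else:
            choice = least_loaded
        self.affinity[program_id] = choice
        self.load[choice] += 1
        return choice

    def complete(self, engine_id: str) -> None:
        """Mark one outstanding call on the given engine as finished."""
        self.load[engine_id] -= 1
```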
