
Yu-Ting Lee

theQuert

AI & ML interests

NLP

Organizations

None yet

theQuert's activity

upvoted an article 1 day ago

Introducing multi-backends (TRT-LLM, vLLM) support for Text Generation Inference

reacted to singhsidhukuldeep's post with 👀 4 months ago
Just tried LitServe from the good folks at @LightningAI!

Between llama.cpp and vLLM there is a small gap: a few large models aren't easily deployable with either!

That's where LitServe comes in!

LitServe is a high-throughput serving engine for AI models built on FastAPI.

Yes, built on FastAPI. That's where the advantage and the issue lie.

It's extremely flexible and supports multi-modality and a variety of models out of the box.
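The core idea is a request lifecycle of four hooks: setup, decode_request, predict, and encode_response, which the server drives for every request. Here is a plain-Python mock of that pattern for illustration only; it does not use the real `litserve` package, and the actual library's method signatures may differ by version:

```python
# Illustrative mock of LitServe's LitAPI lifecycle
# (setup -> decode_request -> predict -> encode_response).
# Plain Python only; the real library is `litserve` and its API may differ.

class MockLitAPI:
    def setup(self, device: str) -> None:
        # Load the model once at startup (here: a trivial uppercase "model").
        self.model = lambda text: text.upper()

    def decode_request(self, request: dict) -> str:
        # Pull the model input out of the incoming JSON body.
        return request["input"]

    def predict(self, x: str) -> str:
        # Run the model on the decoded input.
        return self.model(x)

    def encode_response(self, output: str) -> dict:
        # Wrap the raw output back into a JSON-serializable response.
        return {"output": output}


def handle(api: MockLitAPI, request: dict) -> dict:
    # The serving engine drives each request through the four hooks in order.
    return api.encode_response(api.predict(api.decode_request(request)))


api = MockLitAPI()
api.setup(device="cpu")
print(handle(api, {"input": "hello"}))  # -> {'output': 'HELLO'}
```

Because each hook is a plain method, swapping in a different model or request format means overriding one hook rather than rewriting the whole endpoint.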

But in my testing, it lags far behind vLLM in throughput.

Also, no OpenAI API-compatible endpoint is available as of now.

But as we move toward multi-modal models and agents, this is a good starting point. It just has to get faster...

GitHub: https://github.com/Lightning-AI/LitServe