Running gpt-oss Without FlashAttention 3 – Any Alternatives to Ollama?

#72
by shinho0902 - opened

Hi, I have a question regarding the gpt-oss models.
Since my GPU does not support FlashAttention 3, I’ve learned that vLLM cannot serve these models due to the attention sink requirement.

In this case, is Ollama the only available serving option right now for running gpt-oss models on GPUs that don’t support FA3?
Or are there any alternative tools or workarounds?

Thanks in advance!

A recent transformers update allows you to run them on GPUs without FA3 support.
Check this: https://github.com/pcuenca/openai-cookbook/blob/gpt-oss-on-colab/articles/gpt-oss/run-colab.ipynb?short_path=d3b3a3f
Note: in my experiments it requires more VRAM than Ollama.
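
If it helps, here is a minimal sketch of that transformers route. The `openai/gpt-oss-20b` model id and the `attn_implementation="eager"` / `device_map="auto"` choices are my assumptions for a GPU without FA3, not details taken from the linked notebook; adjust to your setup.

```python
# Minimal sketch: running gpt-oss with plain transformers, no FlashAttention 3.
# Assumes the openai/gpt-oss-20b checkpoint and a recent transformers release.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "openai/gpt-oss-20b"  # assumption: the 20B variant fits your VRAM budget

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",            # keep the dtype stored in the checkpoint
    device_map="auto",             # spread layers across available GPUs/CPU
    attn_implementation="eager",   # avoid FlashAttention kernels entirely
)

messages = [{"role": "user", "content": "Hello, who are you?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```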

I have been heavily focused on Ollama, llama.cpp, vLLM, and SGLang.
I believe there are many ways you can improve a model's performance.
Learning one framework in depth helps a lot; I have picked llama.cpp (a minimal sketch follows below).
I'd advise others to do the same, otherwise you will just be fixing things here and there.
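
For reference, one way to take the llama.cpp route from Python is the llama-cpp-python bindings. The GGUF filename below is hypothetical; substitute whatever gpt-oss conversion/quantization you actually have on disk.

```python
# Minimal sketch of the llama.cpp route via the llama-cpp-python bindings.
from llama_cpp import Llama

llm = Llama(
    model_path="gpt-oss-20b.Q4_K_M.gguf",  # hypothetical local GGUF file
    n_gpu_layers=-1,   # offload all layers to the GPU
    n_ctx=8192,        # context window; adjust to your VRAM
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is FlashAttention 3?"}]
)
print(out["choices"][0]["message"]["content"])
```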

A solution for a single A100 (80 GB) to serve either the 20B or the 120B version: Tutel Instruction to Run GptOSS 120B.

It is about 3 times faster than Ollama for the 20B version (210 tps vs. Ollama's 70 tps).
