no vllm?
#7
by silvacarl - opened
Does anyone have an example of how this can be run without using vLLM?
If you want to look at the engine implementation in vLLM: https://github.com/vllm-project/vllm/blob/d0dc4cfca48c2734da18ec42d6bba1346cbfc400/vllm/model_executor/models/voxtral.py
If you just want to run it offline in PyTorch without their server: https://github.com/vllm-project/vllm/blob/01513a334a451e53162a2526ae28caba7fa868d4/examples/offline_inference/audio_language.py
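
A rough sketch of what that offline route looks like, loosely following the linked `audio_language.py` example. The model ID, prompt template, and engine options below are placeholders/assumptions, not a confirmed Voxtral setup; check the linked example for the exact configuration:

```python
# Minimal sketch of vLLM "offline" (no server) audio inference.
# Assumptions: model ID, prompt marker, and engine options are placeholders.
import librosa
from vllm import LLM, SamplingParams

MODEL_ID = "mistralai/Voxtral-Mini-3B-2507"  # assumption: adjust to your checkpoint

# Load a local audio file as (waveform, sample_rate); vLLM accepts this tuple
# as multimodal audio input.
audio, sample_rate = librosa.load("sample.wav", sr=None)

llm = LLM(
    model=MODEL_ID,
    limit_mm_per_prompt={"audio": 1},  # one audio clip per prompt
)

# The actual prompt/chat template is model-specific; "<audio>" here is just a
# placeholder marker, not Voxtral's real template.
prompt = "Transcribe the following audio: <audio>"

outputs = llm.generate(
    {
        "prompt": prompt,
        "multi_modal_data": {"audio": (audio, sample_rate)},
    },
    SamplingParams(max_tokens=256, temperature=0.0),
)
print(outputs[0].outputs[0].text)
```

Note this still uses vLLM as a Python library, just without running `vllm serve`; for a completely vLLM-free path you'd need to load the checkpoint through whatever Transformers/PyTorch support the model provides.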
EXCELLENT THX!
checking that out.
silvacarl changed discussion status to closed