VLLM error: google/gemma-3n-E4B-it is not a multimodal model None
#39 by tatarrecords - opened
Hi,
I tried to run the google/gemma-3n-E4B-it model in vLLM, but hit this error:
{
"object": "error",
"message": "google/gemma-3n-E4B-it is not a multimodal model None",
"type": "BadRequestError",
"param": null,
"code": 400
}
Is it possible to send images to this model through the vLLM framework? And if it is possible, how can I fix this problem?
Hi,
The Hugging Face and GitHub pages note that the Gemma 3n models had integration issues that require specific, recent dependency updates such as timm. Kindly try the latest vLLM version, as support for new models is added frequently.
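As a rough sketch of that upgrade, assuming a pip-managed environment (package names are the standard PyPI names; pin exact versions according to the vLLM release notes for Gemma 3n support):

```shell
# Upgrade vLLM and the dependencies Gemma 3n needs (e.g. timm).
pip install -U vllm timm transformers

# Then restart the OpenAI-compatible server with the model.
vllm serve google/gemma-3n-E4B-it
```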
Once your vLLM version is compatible, you must format your input to tell the model where the image is: include the special image token in the prompt and pass the image data separately.
Please follow the vLLM documentation on multimodal inputs for more reference. If you have any concerns, let us know and we will assist you.
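As a minimal sketch, here is the shape of an OpenAI-compatible chat request that vLLM's server accepts for image inputs; the server address and image URL are placeholders, and when the model's chat template supports images, vLLM inserts the special image token into the prompt for you:

```python
import json

# Hypothetical request body for POST http://localhost:8000/v1/chat/completions
# (e.g. via requests.post(url, json=payload) or the openai client).
payload = {
    "model": "google/gemma-3n-E4B-it",
    "messages": [
        {
            "role": "user",
            "content": [
                # The text part of the user turn.
                {"type": "text", "text": "Describe this image."},
                # The image is passed separately as an image_url content part.
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/cat.png"},
                },
            ],
        }
    ],
}

print(json.dumps(payload, indent=2))
```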
Thank you.