UD quants cannot "view images" in LM Studio
Hi, I tested the UD Q5_K_XL quant in the latest version of LM Studio with the beta CUDA 12 runtime. Image input is supported, but the model always replies with something along the lines of: "I'm sorry, I'm unable to directly view images...". Is there something wrong with LM Studio, or is it the quant itself? I observed the same behaviour with Gemma 3 UD quants in the past.
Edit: Also, it seems that the BF16 versions of the mmproj files cause the model to crash in LM Studio when given images, for some reason. This crash with the BF16 mmproj also happens with Bartowski's Pixtral quant, but Qwen 2.5 VL seems to work with both BF16 and FP16 mmproj files. Do you have any idea why this could be the case?
From what I can see, the original weights for the vision layers are in BF16, so there would be some degradation when using FP16, due to clipping of values, right?
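To illustrate the clipping concern, here is a rough numpy sketch (it assumes the ml_dtypes package for a bfloat16 dtype; the values are just illustrative, not taken from the actual weights):

```python
# Rough illustration of the BF16 -> FP16 clipping concern.
# Requires: pip install numpy ml_dtypes
import numpy as np
from ml_dtypes import bfloat16

# BF16 shares FP32's 8-bit exponent, so it can represent very large magnitudes;
# FP16 has only a 5-bit exponent and saturates near 65504.
vals = np.array([3.0e4, 7.0e4, 1.0e12], dtype=np.float32)

as_bf16 = vals.astype(bfloat16)    # large values stay finite (with a coarser mantissa)
as_fp16 = vals.astype(np.float16)  # anything above ~65504 overflows to inf

print("bf16:", as_bf16)  # all three values remain finite, roughly preserved
print("fp16:", as_fp16)  # the last two overflow to inf
```

The flip side is that FP16 has more mantissa bits than BF16, so for values within its range it is actually more precise; the question is whether the vision weights or activations ever exceed FP16's range.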
Did you check if it works in llama.cpp?
I tested your UD quants with the mmproj file in llama-server.exe, using both the default web interface and Open WebUI, and they work. So the problem is probably that LM Studio is somehow not connecting the mmproj file to the model (judging by the VRAM consumption, the mmproj file does seem to get loaded into VRAM, though).
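For reference, this is roughly how I exercised image input outside of the web UIs, as a minimal sketch (it assumes llama-server was started with --mmproj and is listening on localhost:8080 via its OpenAI-compatible endpoint; the image path and prompt are placeholders):

```python
# Minimal sketch: send an image to llama-server's OpenAI-compatible chat endpoint.
# Assumes llama-server was launched with -m <model.gguf> --mmproj <mmproj.gguf>
# and is listening on http://localhost:8080 (adjust URL/port as needed).
import base64
import requests

with open("test.jpg", "rb") as f:  # placeholder image path
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

payload = {
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image."},
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"},
                },
            ],
        }
    ],
    "max_tokens": 256,
}

resp = requests.post("http://localhost:8080/v1/chat/completions", json=payload)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```

If the reply actually describes the image here but not in LM Studio, that points at LM Studio's handling of the mmproj rather than the quant.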
Regarding the BF16 mmproj files crashing, I can confirm it happens even with llama-server.exe, using the default web interface. I get the following assert error:
llama.cpp\ggml\src\ggml-cuda\im2col.cu:73: GGML_ASSERT(dst->type == GGML_TYPE_F16 || dst->type == GGML_TYPE_F32) failed
This error with BF16 mmproj files occurs for Mistral Small 3.1, Pixtral, and Qwen 2.5 Omni. Gemma 3, Qwen 2.5 VL, and InternVL3 all work fine with BF16 mmproj files.
Also, can you confirm whether BF16 is "better" than FP16 for mmproj files? From what I can see, the original vision weights are in BF16, so BF16 should be used if possible, right? If not, I can resort to FP32 for smaller models, but with larger models like InternVL3 38B, or even Gemma 3 27B, FP32 becomes very clunky and makes it hard to squeeze a usable context size into VRAM.