qwen2.5-vl-72b-instruct-vision-f32.gguf is broken
https://huggingface.co/samgreen/Qwen2.5-VL-72B-Instruct-GGUF/blob/main/qwen2.5-vl-72b-instruct-vision-f32.gguf segfaults, while https://huggingface.co/ggml-org/Qwen2.5-VL-72B-Instruct-GGUF/blob/main/mmproj-Qwen2.5-VL-72B-Instruct-f16.gguf does not.
Your file is the same as https://huggingface.co/Mungert/Qwen2.5-VL-72B-Instruct-GGUF/blob/main/Qwen2.5-VL-72B-Instruct-mmproj-f32.gguf
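For reference, one way to see what differs between the segfaulting and working files is to diff their GGUF metadata keys and tensor names. This is only a sketch using the `gguf` Python package (from llama.cpp's gguf-py); the file paths are placeholders for the downloaded files:

```python
# Compare metadata keys and tensor shapes of two mmproj GGUF files
# to spot differences between a working and a segfaulting conversion.
from gguf import GGUFReader

def summarize(path):
    reader = GGUFReader(path)
    keys = set(reader.fields.keys())
    tensors = {t.name: tuple(int(d) for d in t.shape) for t in reader.tensors}
    return keys, tensors

# Placeholder paths: point these at the two downloaded mmproj files.
keys_a, tensors_a = summarize("qwen2.5-vl-72b-instruct-vision-f32.gguf")
keys_b, tensors_b = summarize("mmproj-Qwen2.5-VL-72B-Instruct-f16.gguf")

print("metadata keys only in A:", sorted(keys_a - keys_b))
print("metadata keys only in B:", sorted(keys_b - keys_a))
print("tensors only in A:", sorted(set(tensors_a) - set(tensors_b)))
print("tensors only in B:", sorted(set(tensors_b) - set(tensors_a)))
```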
I believe something in the implementation was changed at the last minute before it was merged into llama.cpp main, so it's likely these checkpoints (or specifically the vision models) no longer work. I'll probably take them down, since others have provided up-to-date conversions.
Yes, I think so too. I am uploading an up-to-date GGUF now. The f32 multimodal projector will be available in a few minutes here: https://huggingface.co/GeorgyGUF/Qwen2.5-VL-72B-Instruct-bf16.gguf
P.S. I am using llama.cpp b5277.