Script to convert to GGUF

#1
by immortal886 - opened

Thank you for the great work!
I've just fine-tuned the Qwen2.5-Omni-7B model using LoRA, and then merged the adapters into the base model. Now, I’d like to convert this merged model to the GGUF format to run it with llama.cpp.
Could you please guide me on how to do this?

Thanks in advance!

Unsloth AI org


You'll need to read the llama.cpp conversion guide and the llama.cpp PR that adds support for this model — conversion only works once the architecture is supported there!
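For reference, here is a rough sketch of the usual llama.cpp conversion flow for a merged model saved in Hugging Face format. The paths are placeholders, and this assumes your model's architecture is already supported by llama.cpp's converter (for Qwen2.5-Omni, check the relevant PR first — multimodal parts may not convert):

```shell
# Clone llama.cpp and install the converter's Python dependencies
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
pip install -r requirements.txt

# Convert the merged HF-format checkpoint to GGUF
# (/path/to/merged-model is a placeholder for your local directory)
python convert_hf_to_gguf.py /path/to/merged-model \
    --outfile merged-model-f16.gguf \
    --outtype f16

# Optionally quantize after building llama.cpp (llama-quantize is
# produced by the build); Q4_K_M is a common size/quality trade-off
./llama-quantize merged-model-f16.gguf merged-model-q4_k_m.gguf Q4_K_M
```

If the converter errors with an unsupported-architecture message, that's the signal to look at the PR the reply mentions.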
