Issue converting PEFT LoRA fine-tuned model to GGUF

#124
by AdnanRiaz107 - opened

I fine-tuned the Phi-3-128K instruct model using PEFT LoRA and hosted the model on Hugging Face. The model itself works fine, but I am unable to convert it to GGUF format, even though I previously converted a fully fine-tuned model without problems. The conversion fails with "No such file or directory: 'CodePhi-3-mini-0.1Klora/config.json'", even though it is a PEFT LoRA fine-tuned model and it is hosted on Hugging Face. The repository contains the following files:
1. adapter_config.json
2. adapter_model.safetensors
3. special_tokens_map.json
4. tokenizer.json
5. tokenizer.model
6. tokenizer_config.json
7. training_args.bin
8. README.md
When I use the "GGUF my repo" space to convert my PEFT LoRA fine-tuned model, I get the following error:

```
Error: Error converting to fp16:
INFO:hf-to-gguf:Loading model: CodePhi-3-mini-0.1Klora
Traceback (most recent call last):
  File "/home/user/app/llama.cpp/convert_hf_to_gguf.py", line 4149, in <module>
    main()
  File "/home/user/app/llama.cpp/convert_hf_to_gguf.py", line 4117, in main
    hparams = Model.load_hparams(dir_model)
  File "/home/user/app/llama.cpp/convert_hf_to_gguf.py", line 423, in load_hparams
    with open(dir_model / "config.json", "r", encoding="utf-8") as f:
FileNotFoundError: [Errno 2] No such file or directory: 'CodePhi-3-mini-0.1Klora/config.json'
```

AdnanRiaz107 changed discussion title from Issue converting PEFT LoRA fine-tuned model to Issue converting PEFT LoRA fine-tuned model to GGUF

You need to merge the adapter weights into the base model first. Your repository contains only the LoRA adapter (adapter_config.json and adapter_model.safetensors), not a full checkpoint, so convert_hf_to_gguf.py cannot find the config.json it expects.
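For reference, here is a minimal sketch of the merge step using PEFT. The base checkpoint name and the adapter repo id are assumptions (your exact repo id isn't shown in the thread), so substitute your own:

```python
# Minimal merge sketch -- the two ids below are assumptions, not taken from the thread.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "microsoft/Phi-3-mini-128k-instruct"       # assumed base checkpoint
adapter_id = "AdnanRiaz107/CodePhi-3-mini-0.1Klora"  # hypothetical adapter repo id

# Load the base model, then attach the LoRA adapter on top of it
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto")
model = PeftModel.from_pretrained(base, adapter_id)

# Fold the LoRA deltas into the base weights and strip the PEFT wrappers
merged = model.merge_and_unload()

# Save a full checkpoint (config.json included) that convert_hf_to_gguf.py accepts
merged.save_pretrained("CodePhi-3-merged")
AutoTokenizer.from_pretrained(adapter_id).save_pretrained("CodePhi-3-merged")
```

The merged directory then contains config.json plus the full model weights, so gguf-my-repo (or convert_hf_to_gguf.py run locally) can convert it.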

ggml.ai org

Hi, currently gguf-my-repo does not support LoRA adapters.

You can, however, use the convert_lora_to_gguf.py script locally. We will update this repo to support that soon.
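As a rough example of the local invocation (flag names as in a recent llama.cpp checkout; run the script with `--help` to confirm them, and note the paths here are placeholders): `python llama.cpp/convert_lora_to_gguf.py ./CodePhi-3-mini-0.1Klora --base ./Phi-3-mini-128k-instruct --outtype f16`. This produces a GGUF adapter file that llama.cpp applies on top of the base model's GGUF at load time (e.g. via its `--lora` option), rather than a standalone merged model.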
