gguf version please?

#1
by Narutoouz - opened

Great model for edge applications!

Liquid AI org

Thanks for your interest. It's coming!

Also interested in the GGUF files.
I tried to convert with Liquid's fork of llama.cpp (from https://github.com/Liquid4All/liquid_llama.cpp):
python liquid_llama.cpp/convert_hf_to_gguf.py LFM2-350M --outfile LFM2-350M.gguf

but got this error:
INFO:hf-to-gguf:Loading model: LFM2-350M
INFO:hf-to-gguf:Model architecture: LFM2ForCausalLM
ERROR:hf-to-gguf:Model LFM2ForCausalLM is not supported

Thanks for your patience.
GGUFs have been added to the collection https://huggingface.co/collections/LiquidAI/lfm2-686d721927015b2ad73eaa38.

@katrintomanek , I assume you used the master branch of that repository; the actual implementation was on the lfm2-upstream branch.
The PR has since been merged into llama.cpp, so feel free to use upstream.
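For reference, with upstream llama.cpp (after the PR was merged) the conversion would look roughly like this. This is a sketch, not an official recipe: it assumes the LFM2-350M checkpoint has already been downloaded into a local directory named LFM2-350M, and the output filename is illustrative.

```shell
# Get upstream llama.cpp, which now includes LFM2 support,
# and install the Python dependencies for the converter
git clone https://github.com/ggerganov/llama.cpp
pip install -r llama.cpp/requirements.txt

# Convert the local Hugging Face checkpoint to GGUF
# (assumes ./LFM2-350M contains config.json and the safetensors weights)
python llama.cpp/convert_hf_to_gguf.py LFM2-350M --outfile LFM2-350M.gguf
```

If conversion still reports "Model LFM2ForCausalLM is not supported", the checkout likely predates the merged PR, so pulling the latest master should resolve it.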

tarek-liquid changed discussion status to closed
