https://huggingface.co/medicalai/MedFound-Llama3-8B-finetuned
#558
by eleius · opened
I kindly request quants for this model. Thank you
Hi! It's queued and should be done within a few hours, assuming no problems occur. You can check the progress at http://hf.tst.eu/status.html
mradermacher changed discussion status to closed
It says "error/1 bpe-pt missing (0f840d0b…)", not sure what that means?
Oh, sorry, I forgot to report back; good that you prompted me. It means that the pre-tokenizer is not supported by llama.cpp: every pre-tokenizer must be implemented separately in llama.cpp (the llama3 one is, for example). If the model is based on llama3-8b and the tokenizer is not supposed to have changed, then something went wrong when the model was generated.
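For reference, here is a rough sketch of the kind of check that produces this error during GGUF conversion: the converter fingerprints the tokenizer's output on a fixed test string and looks that fingerprint up in a table of known pre-tokenizers. The table contents, test string, and function names below are placeholders/assumptions, not the actual convert_hf_to_gguf.py internals.

```python
# Illustrative sketch only, not the exact llama.cpp conversion code.
from hashlib import sha256
from transformers import AutoTokenizer

# The real converter keeps a table mapping fingerprints of supported
# tokenizers to pre-tokenizer names it knows how to handle (e.g. "llama-bpe").
KNOWN_PRE_TOKENIZERS = {
    "<fingerprint-of-the-stock-llama3-tokenizer>": "llama-bpe",  # placeholder
}

def detect_pre_tokenizer(model_id: str) -> str:
    tok = AutoTokenizer.from_pretrained(model_id)
    # Tokenize a fixed test string and fingerprint the resulting token ids;
    # a modified or custom pre-tokenizer produces a different fingerprint.
    test_text = "Hello World!\n 3.14 https://example.com 'tis"
    fingerprint = sha256(str(tok.encode(test_text)).encode()).hexdigest()
    name = KNOWN_PRE_TOKENIZERS.get(fingerprint)
    if name is None:
        # Essentially the situation behind "bpe-pt missing (<hash>)":
        # the fingerprint is unknown, so no matching implementation exists.
        raise NotImplementedError(f"BPE pre-tokenizer not recognized: {fingerprint}")
    return name

if __name__ == "__main__":
    print(detect_pre_tokenizer("medicalai/MedFound-Llama3-8B-finetuned"))
```

In other words, the hash in the error message is the unrecognized fingerprint of this model's tokenizer, which is why llama.cpp cannot pick a pre-tokenizer implementation for it.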
Ah ok, thanks anyway