PEFT finetuning support
#14 opened 3 days ago by NePe
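For context on what a PEFT request typically amounts to, here is a minimal LoRA sketch using the peft library; the model id and target module names are placeholders, not confirmed values for this checkpoint.

```python
# Minimal LoRA finetuning sketch with the peft library (illustrative only;
# model id, hyperparameters, and target modules are placeholders).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_id = "org/model-name"  # placeholder: substitute the actual checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

lora_config = LoraConfig(
    r=8,                                   # low-rank adapter dimension
    lora_alpha=16,                         # scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # adjust to this model's attention layer names
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()         # only the adapter weights are trainable
```

From there the wrapped model can be passed to a standard transformers Trainer, with only the adapter weights being updated.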
Model output is "!!!!!!!!" when using vLLM
#13 opened 8 days ago by David3698
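As a minimal way to reproduce or rule out setup issues, a bare vLLM invocation might look like the sketch below; the model id is a placeholder, and the explicit dtype is simply a common knob to vary when chasing degenerate output, not a confirmed fix.

```python
# Minimal vLLM sanity check (a sketch, not a confirmed fix for this thread).
from vllm import LLM, SamplingParams

llm = LLM(model="org/model-name",        # placeholder model id
          dtype="bfloat16",              # set dtype explicitly while debugging
          trust_remote_code=True)
params = SamplingParams(temperature=0.7, top_p=0.9, max_tokens=64)
outputs = llm.generate(["Hello, how are you?"], params)
print(outputs[0].outputs[0].text)
```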
Which custom code does this model want to run? Could you please explain? I was using llama.cpp.
#12 opened 16 days ago by JLouisBiz
Why is the C-Eval result 76.8 for the base model but only 38.9 for the instruct model?
#8 opened about 2 months ago by xianf
eos_token_id is a list, not an int
#7 opened 2 months ago by ningpengtao
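A small sketch of how code that assumes an int-valued eos_token_id can cope with a list-valued one (the model id is a placeholder and this is illustrative handling, not the thread's resolution):

```python
# Normalize eos_token_id regardless of whether the config stores an int or a list.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("org/model-name", trust_remote_code=True)  # placeholder id
eos = config.eos_token_id
eos_ids = eos if isinstance(eos, list) else [eos]

# Recent transformers versions accept a list for eos_token_id in generate(), e.g.
#   model.generate(**inputs, eos_token_id=eos_ids)
# Code paths that require a single int can fall back to the first entry:
primary_eos = eos_ids[0]
print(eos_ids, primary_eos)
```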
Awesome!
#6 opened 2 months ago by SicariusSicariiStuff
Run this with chatllm.cpp
#5 opened 2 months ago by J22