Finetuning script using HuggingFace
#30
by
2U1
- opened
https://github.com/2U1/Gemma3-Finetune
I made a script for anyone who wants to fine-tune Gemma 3 with the Hugging Face version directly, especially if, like me, you've had difficulty with other frameworks.
The code uses only Hugging Face libraries to fine-tune the 4B, 12B, and 27B models.
You can also set different learning rates for the vision_tower and the language_model (and for the projector as well).
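A common way to implement per-component learning rates is to pass separate parameter groups to the optimizer. A minimal sketch of that idea is below; the submodule names (`vision_tower`, `language_model`, `multi_modal_projector`) and the learning-rate values are assumptions for illustration, not necessarily what the repo uses.

```python
# Sketch: component-wise learning rates via optimizer parameter groups.
# Submodule name prefixes below are assumed (typical for HF multimodal models);
# adjust them to match model.named_parameters() for your checkpoint.

def build_param_groups(named_params, lr_vision=2e-6, lr_llm=2e-5, lr_proj=1e-5):
    """Split (name, param) pairs into one optimizer group per model component."""
    buckets = {
        "vision_tower": [],
        "language_model": [],
        "multi_modal_projector": [],
    }
    for name, param in named_params:
        for key in buckets:
            if key in name:  # match by substring of the parameter's dotted name
                buckets[key].append(param)
                break
    return [
        {"params": buckets["vision_tower"], "lr": lr_vision},
        {"params": buckets["language_model"], "lr": lr_llm},
        {"params": buckets["multi_modal_projector"], "lr": lr_proj},
    ]
```

With a real model you would then do something like `torch.optim.AdamW(build_param_groups(model.named_parameters()))`, and each component trains at its own rate.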
Feedback and issues are welcome!
Could you provide notebooks for fine-tuning with LoRA and QLoRA?