Finetuning script using HuggingFace

#30
by 2U1 - opened

https://github.com/2U1/Gemma3-Finetune

I made this code for anyone who wants to fine-tune using the Hugging Face version and, like me, has had difficulty with some of the other frameworks.

This code uses only Hugging Face libraries to fine-tune the 4B, 12B, and 27B models.

You can also set different learning rates for the vision_tower and the language_model (and for the projector as well).
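
For reference, differential learning rates can be implemented with optimizer parameter groups. The sketch below is only an illustration, assuming the usual Hugging Face multimodal attribute names (vision_tower, multi_modal_projector, language_model) and placeholder learning rates; it is not the exact code from the repo.

```python
import torch
from transformers import AutoModelForImageTextToText

# Load a Gemma 3 multimodal checkpoint (4B/12B/27B); bf16 keeps memory reasonable.
model = AutoModelForImageTextToText.from_pretrained(
    "google/gemma-3-4b-it",
    torch_dtype=torch.bfloat16,
)

def trainable_params(module):
    # Collect only the parameters that will actually receive gradients.
    return [p for p in module.parameters() if p.requires_grad]

# One parameter group per component, each with its own learning rate.
optimizer = torch.optim.AdamW(
    [
        {"params": trainable_params(model.vision_tower), "lr": 2e-6},
        {"params": trainable_params(model.multi_modal_projector), "lr": 1e-5},
        {"params": trainable_params(model.language_model), "lr": 1e-5},
    ],
    weight_decay=0.01,
)
```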

Feedback and issues are welcome!

Can you provide notebooks for fine-tuning with LoRA and QLoRA?

@Kaith-jeet123 I'll try
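
In the meantime, here is a bare-bones sketch of what a LoRA/QLoRA setup with the peft library could look like. The model id, rank, alpha, and target_modules are illustrative placeholders, not values recommended by this repo; QLoRA differs from plain LoRA mainly in loading the frozen base weights in 4-bit.

```python
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForImageTextToText, BitsAndBytesConfig

# QLoRA: quantize the frozen base weights to 4-bit; drop this config for plain LoRA.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForImageTextToText.from_pretrained(
    "google/gemma-3-4b-it",
    quantization_config=bnb_config,
)

# Attach low-rank adapters to the attention projections.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights should be trainable
```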
