Question about fine-tuning
Greetings,
I'm looking to fine-tune the GPT OSS 20B to be fluent in Serbian.
Since I'm a beginner, I was wondering: how long did it take to fine-tune the model on 4xH100s with a dataset of this size, and could you share your training configuration by any chance?
Thanks.
It took about 12 hours, but I had a few crashes here and there, so the overall compute time for me came out to about 28 hours.
Learning Rate: 2e-4
Batch Size: 4 per device (16 total with 4 GPUs)
Gradient Accumulation: 4 steps
Epochs: 4
Max Sequence Length: 2048 tokens
Warmup Ratio: 3%
LR Scheduler: Cosine with minimum LR (10% of peak)
Gradient Checkpointing: Enabled
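For reference, here is a minimal sketch of what a training script with this configuration could look like. This is not my exact setup: it assumes TRL's SFTTrainer with LoRA (the settings above don't say which framework was used or whether it was a full fine-tune), the openai/gpt-oss-20b checkpoint from the Hugging Face Hub, and a placeholder dataset file name, all of which you'd swap for your own.

```python
# Hypothetical sketch, not the actual training script from this thread.
# Assumes TRL's SFTTrainer + LoRA; dataset path and LoRA settings are placeholders.
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# Placeholder: substitute your own Serbian instruction/chat dataset.
dataset = load_dataset("json", data_files="serbian_sft.jsonl", split="train")

# Illustrative LoRA defaults -- not taken from the config listed above.
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules="all-linear",
    task_type="CAUSAL_LM",
)

# These values mirror the hyperparameters listed above.
training_args = SFTConfig(
    output_dir="gpt-oss-20b-serbian",
    learning_rate=2e-4,
    per_device_train_batch_size=4,      # 16 effective with 4 GPUs
    gradient_accumulation_steps=4,
    num_train_epochs=4,
    max_seq_length=2048,                # called max_length in newer TRL releases
    warmup_ratio=0.03,
    lr_scheduler_type="cosine_with_min_lr",
    lr_scheduler_kwargs={"min_lr_rate": 0.1},  # floor at 10% of peak LR
    gradient_checkpointing=True,
    bf16=True,
    logging_steps=10,
    save_strategy="epoch",
)

trainer = SFTTrainer(
    model="openai/gpt-oss-20b",
    args=training_args,
    train_dataset=dataset,
    peft_config=peft_config,
)
trainer.train()
```

On 4 GPUs you would launch a script like this with torchrun --nproc_per_node=4 train.py or accelerate launch so all devices are used.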
My suggestion would be to use https://github.com/axolotl-ai-cloud/axolotl or https://github.com/hiyouga/LLaMA-Factory - they help speed up the training process if you are getting started as a beginner.
Thank you so much!