Training details
#1 by bezdarnost · opened
Thank you for your great work!
I'm interested in the fine-tuning details:
- Did you use LoRA for fine-tuning, or was it a full fine-tune?
- What was the batch size, and what were the other hyperparameters (e.g., LoRA rank, learning rate)?
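To make the request concrete, here is a sketch of the kind of hyperparameter summary being asked for, assuming a typical LoRA fine-tuning setup. Every value below is a hypothetical placeholder, not a claim about the authors' actual configuration:

```python
# Hypothetical example of the requested details; none of these values
# come from the model authors. They only illustrate a common LoRA setup.
lora_finetune_config = {
    "method": "LoRA",        # as opposed to a full fine-tune
    "lora_rank": 16,         # r: rank of the low-rank update matrices
    "lora_alpha": 32,        # scaling factor applied to the LoRA update
    "lora_dropout": 0.05,
    "batch_size": 64,        # effective size (per-device x grad accumulation)
    "learning_rate": 2e-4,
    "epochs": 3,
}

# LoRA scales its weight update by alpha / r:
scaling = lora_finetune_config["lora_alpha"] / lora_finetune_config["lora_rank"]
print(scaling)  # → 2.0
```

Answering with a table or dict like this would cover both questions at once.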