Training details

#1
by bezdarnost - opened

Thank you for your great work!

I'm interested in the fine-tuning details:

  1. Did you use LoRA for fine-tuning, or was it a full fine-tune?
  2. What was the batch size, and what were the other hyperparameters (LoRA rank, learning rate, etc.)?