llama3-70B-lora-pretrain_v2 / train_results.json
ytcheng — End of training (commit ac59fb7, verified)
{
"epoch": 2.998592210229939,
"total_flos": 1.0917373877893988e+19,
"train_loss": 2.069366264641751,
"train_runtime": 97122.8592,
"train_samples_per_second": 0.263,
"train_steps_per_second": 0.033
}
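A quick sanity check on these metrics (a sketch, not part of the repo: the per-second rates in the file are rounded, so the derived counts are only approximate):

```python
import json

# train_results.json contents, copied from the file above.
results = json.loads("""
{
    "epoch": 2.998592210229939,
    "total_flos": 1.0917373877893988e+19,
    "train_loss": 2.069366264641751,
    "train_runtime": 97122.8592,
    "train_samples_per_second": 0.263,
    "train_steps_per_second": 0.033
}
""")

# Derived quantities: runtime is in seconds, so rate * runtime gives totals.
total_steps = results["train_steps_per_second"] * results["train_runtime"]
total_samples = results["train_samples_per_second"] * results["train_runtime"]
effective_batch = total_samples / total_steps  # samples consumed per optimizer step

print(f"~{total_steps:.0f} steps, ~{total_samples:.0f} samples, "
      f"effective batch size ~{effective_batch:.0f}")
```

Here that works out to roughly 3.2k optimizer steps over ~3 epochs with an effective batch size of about 8.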