---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
  - llama-factory
  - p-tuning
  - generated_from_trainer
model-index:
  - name: train_cola_123_1757596073
    results: []
---

# train_cola_123_1757596073

This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the cola dataset.
It achieves the following results on the evaluation set:

- Loss: 0.2690
- Num input tokens seen: 3669168

## Model description

More information needed

## Intended uses & limitations

More information needed
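
In lieu of further detail, the sketch below shows one way to load the adapter for inference with PEFT. The repo id `rbelanec/train_cola_123_1757596073` and the prompt format are assumptions, not stated in this card.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
adapter_id = "rbelanec/train_cola_123_1757596073"  # assumption: hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
# Attach the p-tuning adapter on top of the frozen base model.
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()

# CoLA is a binary grammatical-acceptability task; this prompt format is an assumption.
prompt = "Is the following sentence grammatically acceptable? 'The boys was here.'"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```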

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a code sketch of these settings follows the list):

- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 123
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
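
As a rough code equivalent of the settings above, the sketch below expresses them as Hugging Face `TrainingArguments` plus a PEFT `PromptEncoderConfig` for p-tuning. It is not the exact LLaMA-Factory invocation used for this run, and `num_virtual_tokens` and `encoder_hidden_size` are illustrative assumptions the card does not report.

```python
from peft import PromptEncoderConfig, get_peft_model
from transformers import AutoModelForCausalLM, TrainingArguments

# P-tuning config; both values below are assumptions (not reported in this card).
peft_config = PromptEncoderConfig(
    task_type="CAUSAL_LM",
    num_virtual_tokens=20,    # assumption
    encoder_hidden_size=128,  # assumption
)

# Hyperparameters as reported above; adamw_torch already defaults to
# betas=(0.9, 0.999) and epsilon=1e-08.
training_args = TrainingArguments(
    output_dir="train_cola_123_1757596073",
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=123,
    optim="adamw_torch",
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=10.0,
)

model = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()  # only the prompt encoder is trainable
```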

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Input Tokens Seen |
|:-------------:|:-----:|:-----:|:---------------:|:-----------------:|
| 0.2459        | 0.5   | 962   | 0.2198          | 184192            |
| 0.1912        | 1.0   | 1924  | 0.3721          | 367320            |
| 0.3065        | 1.5   | 2886  | 0.1982          | 550840            |
| 0.1991        | 2.0   | 3848  | 0.1548          | 734600            |
| 0.1468        | 2.5   | 4810  | 0.1629          | 918600            |
| 0.2305        | 3.0   | 5772  | 0.1594          | 1101216           |
| 0.1234        | 3.5   | 6734  | 0.1433          | 1284288           |
| 0.1783        | 4.0   | 7696  | 0.1576          | 1468552           |
| 0.0612        | 4.5   | 8658  | 0.1543          | 1651528           |
| 0.0945        | 5.0   | 9620  | 0.1614          | 1834816           |
| 0.2202        | 5.5   | 10582 | 0.1649          | 2018016           |
| 0.0456        | 6.0   | 11544 | 0.1480          | 2201584           |
| 0.2812        | 6.5   | 12506 | 0.1692          | 2385200           |
| 0.0300        | 7.0   | 13468 | 0.1700          | 2568288           |
| 0.2057        | 7.5   | 14430 | 0.1858          | 2751584           |
| 0.0633        | 8.0   | 15392 | 0.1887          | 2935056           |
| 0.0306        | 8.5   | 16354 | 0.1888          | 3118000           |
| 0.0089        | 9.0   | 17316 | 0.2036          | 3301760           |
| 0.0546        | 9.5   | 18278 | 0.2047          | 3485344           |
| 0.1406        | 10.0  | 19240 | 0.2037          | 3669168           |
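
For quick inspection, the snippet below replots the validation-loss column from the table above (values copied verbatim); matplotlib availability is an assumption of this example. The curve bottoms out around epoch 3.5 (0.1433) and drifts upward afterwards, which is worth keeping in mind when picking a checkpoint.

```python
import matplotlib.pyplot as plt

# (epoch, validation loss) pairs copied from the training-results table.
epochs = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0,
          5.5, 6.0, 6.5, 7.0, 7.5, 8.0, 8.5, 9.0, 9.5, 10.0]
val_loss = [0.2198, 0.3721, 0.1982, 0.1548, 0.1629, 0.1594, 0.1433,
            0.1576, 0.1543, 0.1614, 0.1649, 0.1480, 0.1692, 0.1700,
            0.1858, 0.1887, 0.1888, 0.2036, 0.2047, 0.2037]

plt.plot(epochs, val_loss, marker="o")
plt.xlabel("Epoch")
plt.ylabel("Validation loss")
plt.title("train_cola_123_1757596073: validation loss per half epoch")
plt.show()
```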

### Framework versions

- PEFT 0.15.2
- Transformers 4.51.3
- PyTorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1