---
base_model: NousResearch/Llama-2-7b-chat-hf
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: instruction_tuned_model
  results: []
---
# instruction_tuned_model
This model is a fine-tuned version of [NousResearch/Llama-2-7b-chat-hf](https://huggingface.co/NousResearch/Llama-2-7b-chat-hf) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.3074
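Since the library is PEFT, this repository holds an adapter rather than full model weights, so it is loaded on top of the base model. A minimal loading sketch, assuming the adapter is published under the hypothetical repo id `your-username/instruction_tuned_model`:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE = "NousResearch/Llama-2-7b-chat-hf"
ADAPTER = "your-username/instruction_tuned_model"  # hypothetical repo id

# Load the base chat model, then attach the fine-tuned adapter on top of it.
base_model = AutoModelForCausalLM.from_pretrained(BASE)
model = PeftModel.from_pretrained(base_model, ADAPTER)
tokenizer = AutoTokenizer.from_pretrained(BASE)
```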
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
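
These settings map one-for-one onto `transformers.TrainingArguments` fields; the sketch below is a hedged reconstruction, where the `output_dir` value is a placeholder and the Adam line above corresponds to the `adam_*` arguments:

```python
from transformers import TrainingArguments

# Hedged reconstruction of the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="instruction_tuned_model",  # placeholder
    learning_rate=2e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=8,  # total train batch size: 16 * 8 = 128
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=3,
)
```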
### Training results
| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.4333        | 0.2712 | 20   | 0.3822          |
| 0.3086        | 0.5424 | 40   | 0.3394          |
| 0.2944        | 0.8136 | 60   | 0.3268          |
| 0.2803        | 1.0847 | 80   | 0.3200          |
| 0.2688        | 1.3559 | 100  | 0.3165          |
| 0.2675        | 1.6271 | 120  | 0.3125          |
| 0.2627        | 1.8983 | 140  | 0.3095          |
| 0.2529        | 2.1695 | 160  | 0.3089          |
| 0.253         | 2.4407 | 180  | 0.3079          |
| 0.2513        | 2.7119 | 200  | 0.3074          |
### Framework versions
- PEFT 0.13.1
- Transformers 4.44.2
- PyTorch 2.4.1+cu121
- Tokenizers 0.19.1