# HGU_rulebook-Llama3.2-Bllossom-5B_fine-tuning-QLoRA-32_128_2
This model is a fine-tuned version of Bllossom/llama-3.2-Korean-Bllossom-AICA-5B on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 5.6778
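
This repository contains a QLoRA (PEFT) adapter rather than full model weights, so it is loaded on top of the base model. The snippet below is a minimal, hedged sketch of that loading path; the 4-bit quantization settings and the placeholder prompt are assumptions, not values recorded on this card.

```python
# Minimal sketch: load the QLoRA adapter on top of the base model.
# The quantization settings are typical QLoRA defaults, assumed rather than documented.
import torch
from transformers import AutoTokenizer, BitsAndBytesConfig
from peft import AutoPeftModelForCausalLM

base_id = "Bllossom/llama-3.2-Korean-Bllossom-AICA-5B"
adapter_id = "TARARARAK/HGU_rulebook-Llama3.2-Bllossom-5B_fine-tuning-QLoRA-32_128_2"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Tokenizer is taken from the base model, assuming training did not modify it.
tokenizer = AutoTokenizer.from_pretrained(base_id)

# AutoPeftModelForCausalLM loads the base model referenced in the adapter config,
# then attaches the LoRA weights from this repository.
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id,
    quantization_config=bnb_config,
    device_map="auto",
)

prompt = "Question: "  # placeholder; the training prompt format is not documented
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```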
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged reproduction sketch follows the list):
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: adamw_bnb_8bit with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 628
- mixed_precision_training: Native AMP
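
For reference, the sketch below shows one way these hyperparameters could map onto a Transformers `TrainingArguments` and a PEFT `LoraConfig`. It is an assumption-laden reconstruction, not the actual training script: the dataset, LoRA dropout, and target modules are not documented here, and the rank/alpha values are only guessed from the "32_128" suffix in the repository name.

```python
# Hedged reproduction sketch only: mirrors the hyperparameters listed above.
# LoRA rank/alpha are assumptions inferred from the repo name suffix "32_128".
from transformers import TrainingArguments
from peft import LoraConfig

training_args = TrainingArguments(
    output_dir="HGU_rulebook-Llama3.2-Bllossom-5B_fine-tuning-QLoRA-32_128_2",
    learning_rate=2e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=8,   # effective batch size: 2 * 8 = 16
    seed=42,
    optim="adamw_bnb_8bit",
    lr_scheduler_type="cosine_with_restarts",
    warmup_ratio=0.1,
    max_steps=628,
    fp16=True,                       # "Native AMP"; bf16 is equally plausible on recent GPUs
)

lora_config = LoraConfig(
    r=32,               # assumed from the repo name
    lora_alpha=128,     # assumed from the repo name
    lora_dropout=0.05,  # not documented; a common default
    task_type="CAUSAL_LM",
)
```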
### Training results
| Training Loss | Epoch  | Step | Validation Loss |
|---------------|--------|------|-----------------|
| 8.0386        | 0.1973 | 31   | 6.9446          |
| 5.7807        | 0.3946 | 62   | 5.7341          |
| 5.6962        | 0.5919 | 93   | 5.6926          |
| 5.6881        | 0.7892 | 124  | 5.6864          |
| 5.6812        | 0.9865 | 155  | 5.6828          |
| 5.6780        | 1.1838 | 186  | 5.6816          |
| 5.6779        | 1.3811 | 217  | 5.6804          |
| 5.6752        | 1.5784 | 248  | 5.6796          |
| 5.6755        | 1.7757 | 279  | 5.6794          |
| 5.6795        | 1.9730 | 310  | 5.6787          |
| 5.6751        | 2.1702 | 341  | 5.6784          |
| 5.6748        | 2.3675 | 372  | 5.6783          |
| 5.6720        | 2.5648 | 403  | 5.6780          |
| 5.6731        | 2.7621 | 434  | 5.6777          |
| 5.6714        | 2.9594 | 465  | 5.6776          |
| 5.6734        | 3.1567 | 496  | 5.6779          |
| 5.6707        | 3.3540 | 527  | 5.6778          |
| 5.6711        | 3.5513 | 558  | 5.6778          |
| 5.6710        | 3.7486 | 589  | 5.6778          |
| 5.6719        | 3.9459 | 620  | 5.6778          |
### Framework versions
- PEFT 0.12.0
- Transformers 4.46.2
- PyTorch 2.0.1+cu118
- Datasets 3.0.0
- Tokenizers 0.20.1