HGU_rulebook-Llama3.2-Bllossom-5B_fine-tuning-QLoRA-16_32
This model is a fine-tuned version of Bllossom/llama-3.2-Korean-Bllossom-AICA-5B on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 5.6917
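The repository name indicates QLoRA fine-tuning, so the weights published here are most likely a LoRA adapter rather than a full model. Below is a minimal loading sketch; the 4-bit quantization settings are typical QLoRA choices and are assumptions not confirmed by this card.

```python
# Minimal sketch: load the base model in 4-bit and attach the adapter.
# The BitsAndBytesConfig values below are common QLoRA defaults, not
# settings confirmed by this model card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "Bllossom/llama-3.2-Korean-Bllossom-AICA-5B"
adapter_id = "TARARARAK/HGU_rulebook-Llama3.2-Bllossom-5B_fine-tuning-QLoRA-16_32"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                       # assumed: QLoRA serves the base in 4-bit
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the LoRA adapter

prompt = "..."  # your prompt here
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```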
Model description
More information needed
Intended uses & limitations
More information needed
Training and evaluation data
More information needed
Training procedure
Training hyperparameters
The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: adamw_bnb_8bit with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 1570
- mixed_precision_training: Native AMP
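These hyperparameters map onto a transformers/PEFT setup roughly as sketched below. The LoRA rank and alpha (16/32, inferred from the "16_32" suffix in the model name) and the target modules are assumptions; only the Trainer arguments come from the list above.

```python
# Sketch of a Trainer configuration matching the hyperparameters above.
# LoRA r=16 / lora_alpha=32 are inferred from the "16_32" name suffix, and
# target_modules is a common choice for Llama models -- both are assumptions.
from transformers import TrainingArguments
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,                                            # assumed from "16_32" suffix
    lora_alpha=32,                                   # assumed from "16_32" suffix
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumption
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="HGU_rulebook-qlora",
    learning_rate=1e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    gradient_accumulation_steps=8,       # effective train batch size: 2 * 8 = 16
    optim="adamw_bnb_8bit",              # betas/epsilon as listed above
    lr_scheduler_type="cosine_with_restarts",
    warmup_ratio=0.1,
    max_steps=1570,
    fp16=True,                           # "Native AMP"; bf16 is also possible (assumption)
)
```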
Training results
| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 12.7741       | 0.3182 | 50   | 12.6253         |
| 10.5813       | 0.6364 | 100  | 10.2784         |
| 8.1914        | 0.9547 | 150  | 8.0146          |
| 6.8624        | 1.2729 | 200  | 6.7432          |
| 6.1578        | 1.5911 | 250  | 6.1182          |
| 5.9106        | 1.9093 | 300  | 5.8955          |
| 5.8115        | 2.2275 | 350  | 5.8042          |
| 5.7592        | 2.5457 | 400  | 5.7586          |
| 5.7362        | 2.8640 | 450  | 5.7360          |
| 5.7223        | 3.1822 | 500  | 5.7236          |
| 5.7137        | 3.5004 | 550  | 5.7152          |
| 5.7114        | 3.8186 | 600  | 5.7093          |
| 5.7006        | 4.1368 | 650  | 5.7056          |
| 5.7030        | 4.4551 | 700  | 5.7025          |
| 5.7000        | 4.7733 | 750  | 5.7003          |
| 5.6968        | 5.0915 | 800  | 5.6984          |
| 5.6966        | 5.4097 | 850  | 5.6970          |
| 5.6944        | 5.7279 | 900  | 5.6958          |
| 5.6934        | 6.0461 | 950  | 5.6948          |
| 5.6916        | 6.3644 | 1000 | 5.6941          |
| 5.6904        | 6.6826 | 1050 | 5.6936          |
| 5.6907        | 7.0008 | 1100 | 5.6931          |
| 5.6916        | 7.3190 | 1150 | 5.6928          |
| 5.6906        | 7.6372 | 1200 | 5.6924          |
| 5.6914        | 7.9554 | 1250 | 5.6922          |
| 5.6923        | 8.2737 | 1300 | 5.6920          |
| 5.6875        | 8.5919 | 1350 | 5.6918          |
| 5.6884        | 8.9101 | 1400 | 5.6918          |
| 5.6913        | 9.2283 | 1450 | 5.6917          |
| 5.6916        | 9.5465 | 1500 | 5.6917          |
| 5.6884        | 9.8648 | 1550 | 5.6917          |
Framework versions
- PEFT 0.12.0
- Transformers 4.46.2
- Pytorch 2.0.1+cu118
- Datasets 3.0.0
- Tokenizers 0.20.1
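To reproduce results, it may help to confirm that a local environment matches the versions listed above. A minimal check, assuming all five packages are importable:

```python
# Quick check that installed packages match the versions listed above.
import datasets, peft, tokenizers, torch, transformers

expected = {
    "peft": "0.12.0",
    "transformers": "4.46.2",
    "torch": "2.0.1+cu118",
    "datasets": "3.0.0",
    "tokenizers": "0.20.1",
}
installed = {
    "peft": peft.__version__,
    "transformers": transformers.__version__,
    "torch": torch.__version__,
    "datasets": datasets.__version__,
    "tokenizers": tokenizers.__version__,
}
for name, want in expected.items():
    have = installed[name]
    status = "OK" if have == want else f"mismatch (installed {have})"
    print(f"{name}: expected {want} -> {status}")
```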