# HGU_rulebook-Llama3.2-Bllossom-5B_fine-tuning-QLoRA-32_8
This model is a QLoRA fine-tuned version of [Bllossom/llama-3.2-Korean-Bllossom-AICA-5B](https://huggingface.co/Bllossom/llama-3.2-Korean-Bllossom-AICA-5B) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 5.6957
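
Since this repository ships a PEFT (LoRA) adapter rather than full model weights, the base model has to be loaded first and the adapter attached on top. A minimal loading sketch, assuming 4-bit NF4 quantization at inference (the card does not record the exact QLoRA quantization config) and an illustrative prompt:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

BASE = "Bllossom/llama-3.2-Korean-Bllossom-AICA-5B"
ADAPTER = "TARARARAK/HGU_rulebook-Llama3.2-Bllossom-5B_fine-tuning-QLoRA-32_8"

# 4-bit NF4 loading mirrors a typical QLoRA setup -- an assumption, since
# the card does not state the quantization config used in training.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(BASE)
base_model = AutoModelForCausalLM.from_pretrained(
    BASE, quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, ADAPTER)  # attach the LoRA adapter
model.eval()

prompt = "What does the rulebook say about leave of absence?"  # illustrative
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```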
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a sketch mapping them onto `TrainingArguments` follows the list):
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 2189
- mixed_precision_training: Native AMP
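
These values map directly onto `transformers.TrainingArguments`. A minimal sketch of that mapping follows; the output path is illustrative, the 50-step evaluation cadence is inferred from the results table below, and `fp16=True` stands in for "Native AMP":

```python
from transformers import TrainingArguments

# Reconstruction of the listed hyperparameters; Adam betas/epsilon are left
# at their defaults, which match the values stated above.
training_args = TrainingArguments(
    output_dir="HGU_rulebook-Llama3.2-Bllossom-5B_fine-tuning-QLoRA-32_8",
    learning_rate=1e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=8,  # 2 * 8 = 16 total train batch size
    seed=42,
    lr_scheduler_type="cosine_with_restarts",
    warmup_ratio=0.1,
    max_steps=2189,
    fp16=True,       # "Native AMP"; bf16 is also plausible (an assumption)
    eval_strategy="steps",
    eval_steps=50,   # inferred from the 50-step cadence in the results table
    logging_steps=50,
)
```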
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:---:|:---:|:---:|:---:|
| 11.0657 | 0.3197 | 50 | 10.8531 |
| 8.8231 | 0.6395 | 100 | 8.5461 |
| 6.9266 | 0.9592 | 150 | 6.7521 |
| 6.0575 | 1.2790 | 200 | 6.0233 |
| 5.8686 | 1.5987 | 250 | 5.8543 |
| 5.7932 | 1.9185 | 300 | 5.7851 |
| 5.7528 | 2.2382 | 350 | 5.7494 |
| 5.7332 | 2.5580 | 400 | 5.7294 |
| 5.7202 | 2.8777 | 450 | 5.7179 |
| 5.713 | 3.1974 | 500 | 5.7111 |
| 5.7056 | 3.5172 | 550 | 5.7073 |
| 5.7074 | 3.8369 | 600 | 5.7051 |
| 5.7057 | 4.1567 | 650 | 5.7036 |
| 5.7044 | 4.4764 | 700 | 5.7025 |
| 5.7004 | 4.7962 | 750 | 5.7016 |
| 5.7014 | 5.1159 | 800 | 5.7007 |
| 5.7029 | 5.4357 | 850 | 5.7001 |
| 5.6985 | 5.7554 | 900 | 5.6995 |
| 5.7009 | 6.0751 | 950 | 5.6991 |
| 5.7 | 6.3949 | 1000 | 5.6986 |
| 5.6982 | 6.7146 | 1050 | 5.6983 |
| 5.6978 | 7.0344 | 1100 | 5.6979 |
| 5.6967 | 7.3541 | 1150 | 5.6976 |
| 5.6979 | 7.6739 | 1200 | 5.6974 |
| 5.6983 | 7.9936 | 1250 | 5.6971 |
| 5.6975 | 8.3133 | 1300 | 5.6969 |
| 5.6984 | 8.6331 | 1350 | 5.6967 |
| 5.6992 | 8.9528 | 1400 | 5.6965 |
| 5.6965 | 9.2726 | 1450 | 5.6964 |
| 5.6984 | 9.5923 | 1500 | 5.6963 |
| 5.6969 | 9.9121 | 1550 | 5.6962 |
| 5.6968 | 10.2318 | 1600 | 5.6961 |
| 5.6983 | 10.5516 | 1650 | 5.6960 |
| 5.6997 | 10.8713 | 1700 | 5.6959 |
| 5.6959 | 11.1910 | 1750 | 5.6959 |
| 5.6951 | 11.5108 | 1800 | 5.6958 |
| 5.6968 | 11.8305 | 1850 | 5.6958 |
| 5.6959 | 12.1503 | 1900 | 5.6957 |
| 5.6948 | 12.4700 | 1950 | 5.6957 |
| 5.6955 | 12.7898 | 2000 | 5.6957 |
| 5.6962 | 13.1095 | 2050 | 5.6957 |
| 5.695 | 13.4293 | 2100 | 5.6957 |
| 5.6958 | 13.7490 | 2150 | 5.6957 |
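
Validation loss plateaus at 5.6957 from about step 1900 onward. For deployment without runtime adapter loading, the adapter can be folded into an unquantized copy of the base model with PEFT's `merge_and_unload`. A minimal sketch, assuming enough memory to hold the base model in bf16; the output directory is illustrative:

```python
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "Bllossom/llama-3.2-Korean-Bllossom-AICA-5B",
    torch_dtype=torch.bfloat16,  # merge into an unquantized copy, not a 4-bit one
)
model = PeftModel.from_pretrained(
    base, "TARARARAK/HGU_rulebook-Llama3.2-Bllossom-5B_fine-tuning-QLoRA-32_8"
)
merged = model.merge_and_unload()  # bake the LoRA deltas into the base weights
merged.save_pretrained("HGU_rulebook-merged")  # illustrative output directory
```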
### Framework versions
- PEFT 0.14.1.dev0
- Transformers 4.45.2
- PyTorch 2.4.1+cu118
- Datasets 3.0.1
- Tokenizers 0.20.1