HGU_rulebook-Llama3.2-Bllossom-5B_fine-tuning-QLoRA-64_16

This model is a QLoRA adapter fine-tuned from Bllossom/llama-3.2-Korean-Bllossom-AICA-5B on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 5.7052
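
Because this is a PEFT adapter rather than a standalone checkpoint, it must be attached to the base model at load time. Below is a minimal loading sketch: the 4-bit quantization settings are assumptions consistent with QLoRA (not confirmed by this card), it assumes the base checkpoint loads as a causal LM, and the prompt is purely illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "Bllossom/llama-3.2-Korean-Bllossom-AICA-5B"
adapter_id = "TARARARAK/HGU_rulebook-Llama3.2-Bllossom-5B_fine-tuning-QLoRA-64_16"

# 4-bit NF4 quantization, the usual QLoRA setup (assumed; not stated in this card).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# Attach the fine-tuned LoRA weights on top of the quantized base model.
model = PeftModel.from_pretrained(base_model, adapter_id)

inputs = tokenizer("안녕하세요", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```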

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 2
  • eval_batch_size: 2
  • seed: 42
  • gradient_accumulation_steps: 8
  • total_train_batch_size: 16
  • optimizer: adamw_bnb_8bit with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: cosine_with_restarts
  • lr_scheduler_warmup_ratio: 0.1
  • training_steps: 1570
  • mixed_precision_training: Native AMP
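
For reference, the list above maps onto a Trainer configuration roughly as follows. This is a reconstruction, not the original training script: the LoRA rank and alpha (64 and 16) are guessed from the "-QLoRA-64_16" suffix in the model name, and the dropout and target modules are common defaults rather than values stated in this card.

```python
from transformers import TrainingArguments
from peft import LoraConfig

# LoRA configuration: r/alpha inferred from the "-64_16" model-name suffix;
# lora_dropout and target_modules are assumptions, not documented values.
lora_config = LoraConfig(
    r=64,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

# Direct transcription of the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="HGU_rulebook-Llama3.2-Bllossom-5B_fine-tuning-QLoRA-64_16",
    learning_rate=1e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    gradient_accumulation_steps=8,   # total train batch size: 2 * 8 = 16
    optim="adamw_bnb_8bit",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine_with_restarts",
    warmup_ratio=0.1,
    max_steps=1570,
    fp16=True,                       # Native AMP mixed precision
)
```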

Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:--------------|:-------|-----:|:----------------|
| 12.8912       | 0.3182 |   50 | 12.8247         |
| 11.6977       | 0.6364 |  100 | 11.3812         |
| 9.1603        | 0.9547 |  150 | 8.9505          |
| 7.6735        | 1.2729 |  200 | 7.5549          |
| 6.7945        | 1.5911 |  250 | 6.7118          |
| 6.2617        | 1.9093 |  300 | 6.2235          |
| 6.0081        | 2.2275 |  350 | 5.9894          |
| 5.8829        | 2.5457 |  400 | 5.8750          |
| 5.8219        | 2.8640 |  450 | 5.8154          |
| 5.7859        | 3.1822 |  500 | 5.7831          |
| 5.7645        | 3.5004 |  550 | 5.7624          |
| 5.7485        | 3.8186 |  600 | 5.7478          |
| 5.7375        | 4.1368 |  650 | 5.7377          |
| 5.7339        | 4.4551 |  700 | 5.7301          |
| 5.7241        | 4.7733 |  750 | 5.7246          |
| 5.7212        | 5.0915 |  800 | 5.7204          |
| 5.7178        | 5.4097 |  850 | 5.7170          |
| 5.7158        | 5.7279 |  900 | 5.7145          |
| 5.7113        | 6.0461 |  950 | 5.7124          |
| 5.7110        | 6.3644 | 1000 | 5.7107          |
| 5.7062        | 6.6826 | 1050 | 5.7093          |
| 5.7075        | 7.0008 | 1100 | 5.7082          |
| 5.7079        | 7.3190 | 1150 | 5.7074          |
| 5.7104        | 7.6372 | 1200 | 5.7067          |
| 5.7046        | 7.9554 | 1250 | 5.7063          |
| 5.7027        | 8.2737 | 1300 | 5.7058          |
| 5.7049        | 8.5919 | 1350 | 5.7056          |
| 5.7032        | 8.9101 | 1400 | 5.7053          |
| 5.7048        | 9.2283 | 1450 | 5.7053          |
| 5.7057        | 9.5465 | 1500 | 5.7052          |
| 5.7035        | 9.8648 | 1550 | 5.7052          |

Framework versions

  • PEFT 0.12.0
  • Transformers 4.46.2
  • PyTorch 2.0.1+cu118
  • Datasets 3.0.0
  • Tokenizers 0.20.1
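
The pinned versions above can be sanity-checked before loading the adapter; this snippet is a convenience sketch, not part of the original repository.

```python
# Compare installed library versions against the ones listed in this card.
import peft, transformers, torch, datasets, tokenizers

expected = {
    peft: "0.12.0",
    transformers: "4.46.2",
    torch: "2.0.1+cu118",
    datasets: "3.0.0",
    tokenizers: "0.20.1",
}
for module, version in expected.items():
    ok = module.__version__.startswith(version.split("+")[0])
    status = "OK" if ok else "MISMATCH"
    print(f"{module.__name__}: installed {module.__version__}, "
          f"card lists {version} ({status})")
```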