# MentaLLaMA-chat-7B-PsyCourse-fold10
This model is a fine-tuned version of [klyang/MentaLLaMA-chat-7B-hf](https://huggingface.co/klyang/MentaLLaMA-chat-7B-hf) on the course-train-fold10 dataset. It achieves the following results on the evaluation set:

- Loss: 0.0297
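
Since the framework versions below include PEFT, this repository presumably hosts a parameter-efficient (LoRA-style) adapter on top of the base model rather than full weights. The snippet below is a minimal, unofficial inference sketch under that assumption; the prompt is only a placeholder.

```python
# Minimal inference sketch (unofficial): load the base model, then apply this
# repository's PEFT adapter on top of it. Assumes torch, transformers, peft,
# and accelerate are installed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "klyang/MentaLLaMA-chat-7B-hf"
adapter_id = "chchen/MentaLLaMA-chat-7B-PsyCourse-fold10"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()

prompt = "What are some healthy coping strategies for exam stress?"  # placeholder
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```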
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (an equivalent `TrainingArguments` sketch follows the list):
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
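
For reference, the listed values translate into roughly the following `TrainingArguments`; this is a sketch, not the actual training script, and the `output_dir` is hypothetical.

```python
# Hedged reconstruction of the hyperparameters above as transformers
# TrainingArguments; argument names follow transformers 4.46.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="psycourse-fold10",   # hypothetical output path
    learning_rate=1e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    gradient_accumulation_steps=16,  # 1 sample/device * 16 steps = 16 total
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=5.0,
)
```

The total train batch size of 16 is simply the per-device batch size (1) multiplied by the 16 gradient-accumulation steps on a single device.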
### Training results
Training Loss | Epoch | Step | Validation Loss |
---|---|---|---|
0.8796 | 0.0770 | 50 | 0.6149 |
0.1225 | 0.1539 | 100 | 0.1079 |
0.0896 | 0.2309 | 150 | 0.0615 |
0.0656 | 0.3078 | 200 | 0.0519 |
0.0599 | 0.3848 | 250 | 0.0487 |
0.0559 | 0.4618 | 300 | 0.0453 |
0.047 | 0.5387 | 350 | 0.0427 |
0.0386 | 0.6157 | 400 | 0.0398 |
0.0402 | 0.6926 | 450 | 0.0429 |
0.0525 | 0.7696 | 500 | 0.0384 |
0.0374 | 0.8466 | 550 | 0.0381 |
0.0479 | 0.9235 | 600 | 0.0336 |
0.0464 | 1.0005 | 650 | 0.0337 |
0.0291 | 1.0774 | 700 | 0.0351 |
0.0266 | 1.1544 | 750 | 0.0336 |
0.0268 | 1.2314 | 800 | 0.0323 |
0.0257 | 1.3083 | 850 | 0.0325 |
0.0311 | 1.3853 | 900 | 0.0323 |
0.0345 | 1.4622 | 950 | 0.0317 |
0.0287 | 1.5392 | 1000 | 0.0345 |
0.0333 | 1.6162 | 1050 | 0.0313 |
0.0278 | 1.6931 | 1100 | 0.0314 |
0.0239 | 1.7701 | 1150 | 0.0306 |
0.0346 | 1.8470 | 1200 | 0.0312 |
0.0289 | 1.9240 | 1250 | 0.0301 |
0.0308 | 2.0010 | 1300 | 0.0319 |
0.0176 | 2.0779 | 1350 | 0.0314 |
0.0192 | 2.1549 | 1400 | 0.0312 |
0.0157 | 2.2318 | 1450 | 0.0315 |
0.0227 | 2.3088 | 1500 | 0.0301 |
0.0211 | 2.3858 | 1550 | 0.0305 |
0.0192 | 2.4627 | 1600 | 0.0327 |
0.0215 | 2.5397 | 1650 | 0.0311 |
0.0235 | 2.6166 | 1700 | 0.0297 |
0.0149 | 2.6936 | 1750 | 0.0312 |
0.0206 | 2.7706 | 1800 | 0.0299 |
0.0154 | 2.8475 | 1850 | 0.0318 |
0.0188 | 2.9245 | 1900 | 0.0301 |
0.0198 | 3.0014 | 1950 | 0.0300 |
0.0117 | 3.0784 | 2000 | 0.0322 |
0.0103 | 3.1554 | 2050 | 0.0334 |
0.0158 | 3.2323 | 2100 | 0.0343 |
0.0116 | 3.3093 | 2150 | 0.0330 |
0.0117 | 3.3862 | 2200 | 0.0342 |
0.01 | 3.4632 | 2250 | 0.0345 |
0.0113 | 3.5402 | 2300 | 0.0345 |
0.0111 | 3.6171 | 2350 | 0.0342 |
0.0105 | 3.6941 | 2400 | 0.0351 |
0.0094 | 3.7710 | 2450 | 0.0365 |
0.0144 | 3.8480 | 2500 | 0.0337 |
0.0071 | 3.9250 | 2550 | 0.0341 |
0.0081 | 4.0019 | 2600 | 0.0339 |
0.0051 | 4.0789 | 2650 | 0.0355 |
0.0081 | 4.1558 | 2700 | 0.0364 |
0.0068 | 4.2328 | 2750 | 0.0382 |
0.0055 | 4.3098 | 2800 | 0.0389 |
0.0045 | 4.3867 | 2850 | 0.0386 |
0.0033 | 4.4637 | 2900 | 0.0386 |
0.0031 | 4.5406 | 2950 | 0.0391 |
0.0055 | 4.6176 | 3000 | 0.0393 |
0.0081 | 4.6946 | 3050 | 0.0395 |
0.0034 | 4.7715 | 3100 | 0.0396 |
0.0073 | 4.8485 | 3150 | 0.0396 |
0.0043 | 4.9254 | 3200 | 0.0396 |
### Framework versions
- PEFT 0.12.0
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3