id_ravdess_mel_spec_Vit_vit-tiny-patch16-224_2

This model is a fine-tuned version of WinKawaks/vit-tiny-patch16-224 on an imagefolder dataset (per the model name, mel-spectrogram images derived from RAVDESS audio). It achieves the following results on the evaluation set:

  • Loss: 0.8514
  • Accuracy: 0.7870
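
No usage example is given in the card; the snippet below is a minimal inference sketch, assuming the model is applied to mel spectrograms rendered as ordinary image files. The file name mel_spec.png is a placeholder, not part of the original card.

```python
from transformers import pipeline

# Minimal sketch (assumption, not from the card): classify a mel-spectrogram
# image that has been saved as a regular image file. "mel_spec.png" is a placeholder path.
classifier = pipeline(
    "image-classification",
    model="ricardoSLabs/id_ravdess_mel_spec_Vit_vit-tiny-patch16-224_2",
)

predictions = classifier("mel_spec.png")  # list of {"label": ..., "score": ...} dicts
print(predictions)
```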

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 43
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 128
  • optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 50
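
For readers who want to approximate this setup, here is a hedged sketch of the corresponding TrainingArguments. Only the values listed above come from the card; the output directory name and the per-epoch evaluation strategy are assumptions.

```python
from transformers import TrainingArguments

# Sketch of TrainingArguments matching the hyperparameters listed above.
# Anything not listed in the card (output_dir, eval_strategy) is an assumption.
training_args = TrainingArguments(
    output_dir="id_ravdess_mel_spec_Vit_vit-tiny-patch16-224_2",
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=43,
    gradient_accumulation_steps=4,   # effective train batch size: 32 * 4 = 128
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
    eval_strategy="epoch",           # assumption: the results table reports one evaluation per epoch
)
```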

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log        | 1.0   | 8    | 3.4190          | 0.0324   |
| 3.577         | 2.0   | 16   | 3.1245          | 0.0972   |
| 3.124         | 3.0   | 24   | 2.9168          | 0.1806   |
| 2.7828        | 4.0   | 32   | 2.5031          | 0.2454   |
| 2.1565        | 5.0   | 40   | 2.0137          | 0.3796   |
| 2.1565        | 6.0   | 48   | 1.6643          | 0.5463   |
| 1.4039        | 7.0   | 56   | 1.4283          | 0.5880   |
| 0.8467        | 8.0   | 64   | 1.2302          | 0.6620   |
| 0.471         | 9.0   | 72   | 1.1420          | 0.6574   |
| 0.245         | 10.0  | 80   | 1.0551          | 0.7130   |
| 0.245         | 11.0  | 88   | 1.0008          | 0.7361   |
| 0.1111        | 12.0  | 96   | 0.9660          | 0.7500   |
| 0.0515        | 13.0  | 104  | 0.9158          | 0.7407   |
| 0.0227        | 14.0  | 112  | 0.9132          | 0.7407   |
| 0.0106        | 15.0  | 120  | 0.8355          | 0.7685   |
| 0.0106        | 16.0  | 128  | 0.8486          | 0.7639   |
| 0.0042        | 17.0  | 136  | 0.8263          | 0.7778   |
| 0.0021        | 18.0  | 144  | 0.8304          | 0.7731   |
| 0.0013        | 19.0  | 152  | 0.8260          | 0.7824   |
| 0.0009        | 20.0  | 160  | 0.8407          | 0.7731   |
| 0.0009        | 21.0  | 168  | 0.8337          | 0.7824   |
| 0.0008        | 22.0  | 176  | 0.8311          | 0.7824   |
| 0.0006        | 23.0  | 184  | 0.8370          | 0.7778   |
| 0.0006        | 24.0  | 192  | 0.8371          | 0.7778   |
| 0.0005        | 25.0  | 200  | 0.8373          | 0.7870   |
| 0.0005        | 26.0  | 208  | 0.8399          | 0.7870   |
| 0.0005        | 27.0  | 216  | 0.8394          | 0.7870   |
| 0.0005        | 28.0  | 224  | 0.8412          | 0.7824   |
| 0.0004        | 29.0  | 232  | 0.8416          | 0.7870   |
| 0.0004        | 30.0  | 240  | 0.8431          | 0.7870   |
| 0.0004        | 31.0  | 248  | 0.8450          | 0.7824   |
| 0.0004        | 32.0  | 256  | 0.8430          | 0.7870   |
| 0.0004        | 33.0  | 264  | 0.8458          | 0.7824   |
| 0.0003        | 34.0  | 272  | 0.8466          | 0.7870   |
| 0.0003        | 35.0  | 280  | 0.8454          | 0.7870   |
| 0.0003        | 36.0  | 288  | 0.8468          | 0.7824   |
| 0.0003        | 37.0  | 296  | 0.8484          | 0.7870   |
| 0.0003        | 38.0  | 304  | 0.8484          | 0.7870   |
| 0.0003        | 39.0  | 312  | 0.8492          | 0.7824   |
| 0.0003        | 40.0  | 320  | 0.8498          | 0.7870   |
| 0.0003        | 41.0  | 328  | 0.8492          | 0.7870   |
| 0.0003        | 42.0  | 336  | 0.8491          | 0.7870   |
| 0.0003        | 43.0  | 344  | 0.8505          | 0.7870   |
| 0.0003        | 44.0  | 352  | 0.8509          | 0.7870   |
| 0.0003        | 45.0  | 360  | 0.8505          | 0.7870   |
| 0.0003        | 46.0  | 368  | 0.8509          | 0.7870   |
| 0.0003        | 47.0  | 376  | 0.8510          | 0.7870   |
| 0.0003        | 48.0  | 384  | 0.8511          | 0.7870   |
| 0.0003        | 49.0  | 392  | 0.8514          | 0.7870   |
| 0.0003        | 50.0  | 400  | 0.8514          | 0.7870   |

Framework versions

  • Transformers 4.47.0
  • Pytorch 2.5.1+cu121
  • Datasets 3.3.1
  • Tokenizers 0.21.0