mms-1b-all-bemgen-balanced-42

This model is a fine-tuned version of facebook/mms-1b-all on the BEMGEN (BEM) dataset. It achieves the following results on the evaluation set (a minimal inference example follows the list):

  • Loss: 0.2097
  • WER: 0.3827
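
For quick inspection, the checkpoint can be loaded with the standard transformers CTC classes. The snippet below is a minimal sketch, not an official usage example from the training repository; `audio.wav` is a placeholder for any 16 kHz mono recording:

```python
# Minimal inference sketch (hypothetical usage; `audio.wav` is a placeholder file).
import librosa
import torch
from transformers import AutoProcessor, Wav2Vec2ForCTC

model_id = "csikasote/mms-1b-all-bemgen-balanced-42"
processor = AutoProcessor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# MMS models expect 16 kHz mono audio.
speech, _ = librosa.load("audio.wav", sr=16_000)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Greedy CTC decoding: most likely token per frame; the tokenizer collapses
# repeats and blanks during decoding.
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```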

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a TrainingArguments sketch mirroring them follows the list):

  • learning_rate: 0.0003
  • train_batch_size: 8
  • eval_batch_size: 4
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 16
  • optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 100
  • num_epochs: 30.0
  • mixed_precision_training: Native AMP
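
These settings map onto a transformers TrainingArguments configuration roughly as follows. This is a sketch for orientation, not the exact training script; the output path is a placeholder:

```python
# Hypothetical TrainingArguments mirroring the hyperparameters listed above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="mms-1b-all-bemgen-balanced-42",  # placeholder path
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=4,
    seed=42,
    gradient_accumulation_steps=2,   # effective train batch size: 8 * 2 = 16
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=100,
    num_train_epochs=30.0,
    fp16=True,                       # "Native AMP" mixed-precision training
)
```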

Training results

| Training Loss | Epoch   | Step | Validation Loss | WER    |
|:-------------:|:-------:|:----:|:---------------:|:------:|
| 7.8516        | 0.5076  | 100  | 6.0399          | 1.0    |
| 4.7682        | 1.0152  | 200  | 5.3047          | 1.0    |
| 4.4055        | 1.5228  | 300  | 4.9190          | 0.9999 |
| 2.5358        | 2.0305  | 400  | 0.2801          | 0.4977 |
| 0.4571        | 2.5381  | 500  | 0.2393          | 0.4290 |
| 0.4204        | 3.0457  | 600  | 0.2353          | 0.4153 |
| 0.4038        | 3.5533  | 700  | 0.2277          | 0.4140 |
| 0.3795        | 4.0609  | 800  | 0.2256          | 0.4061 |
| 0.3847        | 4.5685  | 900  | 0.2232          | 0.4007 |
| 0.3712        | 5.0761  | 1000 | 0.2225          | 0.3944 |
| 0.3604        | 5.5838  | 1100 | 0.2187          | 0.3980 |
| 0.3578        | 6.0914  | 1200 | 0.2218          | 0.4031 |
| 0.3521        | 6.5990  | 1300 | 0.2182          | 0.3920 |
| 0.3536        | 7.1066  | 1400 | 0.2170          | 0.3859 |
| 0.3397        | 7.6142  | 1500 | 0.2164          | 0.3894 |
| 0.3375        | 8.1218  | 1600 | 0.2159          | 0.3849 |
| 0.3326        | 8.6294  | 1700 | 0.2164          | 0.4045 |
| 0.3435        | 9.1371  | 1800 | 0.2144          | 0.3839 |
| 0.3313        | 9.6447  | 1900 | 0.2146          | 0.3850 |
| 0.3197        | 10.1523 | 2000 | 0.2133          | 0.3862 |
| 0.317         | 10.6599 | 2100 | 0.2127          | 0.3835 |
| 0.3294        | 11.1675 | 2200 | 0.2144          | 0.3828 |
| 0.3154        | 11.6751 | 2300 | 0.2109          | 0.3956 |
| 0.3227        | 12.1827 | 2400 | 0.2097          | 0.3824 |
| 0.3222        | 12.6904 | 2500 | 0.2124          | 0.3853 |
| 0.3075        | 13.1980 | 2600 | 0.2116          | 0.3806 |
| 0.3048        | 13.7056 | 2700 | 0.2102          | 0.3793 |
| 0.3041        | 14.2132 | 2800 | 0.2133          | 0.3882 |
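
WER in the table above is word error rate: word-level substitutions, deletions, and insertions divided by the number of reference words. A minimal sketch of the usual computation with the evaluate library follows; the example strings are illustrative, not drawn from BEMGEN:

```python
# Illustrative WER computation with the `evaluate` library; strings are made up.
import evaluate

wer_metric = evaluate.load("wer")
predictions = ["the cat sat on mat"]      # hypothetical model output
references = ["the cat sat on the mat"]   # hypothetical reference transcript

# One deleted word out of six reference words -> WER = 1/6 ~ 0.1667
print(wer_metric.compute(predictions=predictions, references=references))
```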

Framework versions

  • Transformers 4.53.0.dev0
  • Pytorch 2.6.0+cu124
  • Datasets 3.6.0
  • Tokenizers 0.21.0