
wav2vec2-xlsr-1b-mecita-portuguese-all-text-a_coisa-os_morcegos

This model is a fine-tuned version of jonatasgrosman/wav2vec2-xls-r-1b-portuguese on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.1774
  • WER: 0.0844
  • CER: 0.0266
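The card does not yet include a usage example, so here is a minimal inference sketch. It is an illustration under stated assumptions, not part of the original card: the repo id below is simply this card's model name (the actual Hub namespace may differ), and `sample.wav` stands in for any 16 kHz mono Portuguese recording.

```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

# Illustrative repo id: this card's model name; the real Hub namespace may differ.
model_id = "wav2vec2-xlsr-1b-mecita-portuguese-all-text-a_coisa-os_morcegos"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# wav2vec2 expects 16 kHz mono audio; "sample.wav" is a placeholder file name.
speech, sr = librosa.load("sample.wav", sr=16_000)

inputs = processor(speech, sampling_rate=sr, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs["input_values"]).logits

# Greedy CTC decoding: take the most likely token at each frame.
pred_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(pred_ids)[0]
print(transcription)
```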

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 3e-05
  • train_batch_size: 16
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 32
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 100
  • mixed_precision_training: Native AMP
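As referenced above, here is a hedged sketch of a `TrainingArguments` configuration matching these hyperparameters. The `output_dir` is illustrative, and the dataset and `Trainer` wiring are omitted because the card does not specify them; the Adam betas and epsilon listed above are the `Trainer` defaults, so they need no explicit argument.

```python
from transformers import TrainingArguments

# A sketch only: reproduces the listed hyperparameters.
training_args = TrainingArguments(
    output_dir="./wav2vec2-xlsr-1b-mecita",  # illustrative path
    learning_rate=3e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,   # 16 x 2 = total train batch size of 32
    num_train_epochs=100,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,                       # Native AMP mixed precision
    evaluation_strategy="epoch",     # per-epoch eval, matching the table below
    save_strategy="epoch",
)
# Trainer(model=..., args=training_args, train_dataset=..., eval_dataset=...)
# is left out: the card gives no dataset details.
```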

Training results

| Training Loss | Epoch | Step | Validation Loss | WER    | CER    |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 25.5905       | 1.0   | 79   | 0.4495          | 0.2580 | 0.0734 |
| 3.1482        | 2.0   | 158  | 0.2479          | 0.1204 | 0.0380 |
| 0.4247        | 3.0   | 237  | 0.2347          | 0.1025 | 0.0345 |
| 0.3136        | 4.0   | 316  | 0.2044          | 0.1017 | 0.0322 |
| 0.3136        | 5.0   | 395  | 0.1906          | 0.0930 | 0.0296 |
| 0.2985        | 6.0   | 474  | 0.2050          | 0.0963 | 0.0311 |
| 0.2413        | 7.0   | 553  | 0.2025          | 0.0971 | 0.0309 |
| 0.2267        | 8.0   | 632  | 0.2006          | 0.0885 | 0.0291 |
| 0.224         | 9.0   | 711  | 0.1991          | 0.0917 | 0.0291 |
| 0.224         | 10.0  | 790  | 0.1881          | 0.0885 | 0.0281 |
| 0.1864        | 11.0  | 869  | 0.1841          | 0.0893 | 0.0278 |
| 0.1951        | 12.0  | 948  | 0.1809          | 0.0895 | 0.0282 |
| 0.1794        | 13.0  | 1027 | 0.1923          | 0.0833 | 0.0280 |
| 0.1621        | 14.0  | 1106 | 0.1949          | 0.0857 | 0.0277 |
| 0.1621        | 15.0  | 1185 | 0.1929          | 0.0817 | 0.0266 |
| 0.1695        | 16.0  | 1264 | 0.1907          | 0.0839 | 0.0270 |
| 0.1528        | 17.0  | 1343 | 0.1839          | 0.0906 | 0.0286 |
| 0.1592        | 18.0  | 1422 | 0.1866          | 0.0903 | 0.0281 |
| 0.1519        | 19.0  | 1501 | 0.2031          | 0.0857 | 0.0275 |
| 0.1519        | 20.0  | 1580 | 0.1948          | 0.0860 | 0.0278 |
| 0.1257        | 21.0  | 1659 | 0.1850          | 0.0860 | 0.0262 |
| 0.1288        | 22.0  | 1738 | 0.1774          | 0.0844 | 0.0266 |
| 0.115         | 23.0  | 1817 | 0.1960          | 0.0844 | 0.0265 |
| 0.115         | 24.0  | 1896 | 0.1832          | 0.0825 | 0.0258 |
| 0.1223        | 25.0  | 1975 | 0.1920          | 0.0828 | 0.0261 |
| 0.1175        | 26.0  | 2054 | 0.1951          | 0.0803 | 0.0260 |
| 0.1051        | 27.0  | 2133 | 0.1996          | 0.0825 | 0.0266 |
| 0.1033        | 28.0  | 2212 | 0.2152          | 0.0847 | 0.0274 |
| 0.1033        | 29.0  | 2291 | 0.2082          | 0.0879 | 0.0277 |
| 0.0961        | 30.0  | 2370 | 0.2153          | 0.0855 | 0.0274 |
| 0.1003        | 31.0  | 2449 | 0.2044          | 0.0903 | 0.0288 |
| 0.1129        | 32.0  | 2528 | 0.2050          | 0.0855 | 0.0268 |
| 0.0939        | 33.0  | 2607 | 0.2028          | 0.0860 | 0.0271 |
| 0.0939        | 34.0  | 2686 | 0.2031          | 0.0847 | 0.0274 |
| 0.0846        | 35.0  | 2765 | 0.2046          | 0.0822 | 0.0269 |
| 0.083         | 36.0  | 2844 | 0.2094          | 0.0825 | 0.0265 |
| 0.0844        | 37.0  | 2923 | 0.2176          | 0.0820 | 0.0268 |
| 0.0829        | 38.0  | 3002 | 0.2082          | 0.0817 | 0.0267 |
| 0.0829        | 39.0  | 3081 | 0.2200          | 0.0893 | 0.0286 |
| 0.103         | 40.0  | 3160 | 0.2102          | 0.0841 | 0.0276 |
| 0.0728        | 41.0  | 3239 | 0.2143          | 0.0817 | 0.0271 |
| 0.079         | 42.0  | 3318 | 0.2131          | 0.0825 | 0.0265 |
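The evaluation results reported at the top of the card (loss 0.1774, WER 0.0844, CER 0.0266) match the epoch 22 row above. WER and CER are the standard word and character error rates; a hedged sketch of computing them with the `evaluate` library follows (the card does not show the actual `compute_metrics` used during training, and the example strings are illustrative):

```python
import evaluate  # these metrics also require the `jiwer` package

wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

# Illustrative strings only; real evaluation would compare model
# transcriptions against reference transcripts.
predictions = ["os morcegos dormem de dia"]
references = ["os morcegos dormem durante o dia"]

print("WER:", wer_metric.compute(predictions=predictions, references=references))
print("CER:", cer_metric.compute(predictions=predictions, references=references))
```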

Framework versions

  • Transformers 4.28.0
  • Pytorch 2.2.1+cu121
  • Datasets 2.17.0
  • Tokenizers 0.13.3
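To reproduce this environment, the versions above can be pinned at install time. A sketch, assuming a CUDA 12.1 wheel to match the `+cu121` build tag (adjust the index URL for your platform):

```
pip install transformers==4.28.0 datasets==2.17.0 tokenizers==0.13.3
pip install torch==2.2.1 --index-url https://download.pytorch.org/whl/cu121
```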