fine-w2v2base-bs16-ep100-lr2e-05-linguistic-rmsnorm-focal_ctc_a0.25_g1.0-0.05_10_0.004_40

This model is a fine-tuned version of nguyenvulebinh/wav2vec2-base-vietnamese-250h on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 1.0608
  • Wer: 0.1012
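
A minimal inference sketch follows. It assumes this checkpoint loads with the standard `Wav2Vec2Processor`/`Wav2Vec2ForCTC` classes used by the base model, and `example.wav` is a placeholder for a 16 kHz Vietnamese audio file; if the processor files are missing from this repository, load them from nguyenvulebinh/wav2vec2-base-vietnamese-250h instead.

```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "tuanio/fine-w2v2base-bs16-ep100-lr2e-05-linguistic-rmsnorm-focal_ctc_a0.25_g1.0-0.05_10_0.004_40"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id).eval()

# Load audio and resample to the 16 kHz rate wav2vec2 expects.
waveform, sample_rate = torchaudio.load("example.wav")
if sample_rate != 16_000:
    waveform = torchaudio.functional.resample(waveform, sample_rate, 16_000)

inputs = processor(waveform.squeeze(0).numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding of the most likely token at each frame.
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```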

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure
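
The model name encodes a focal CTC objective (`focal_ctc_a0.25_g1.0`, i.e. α=0.25 and γ=1.0), though the card does not describe the loss itself. For orientation, a common focal weighting of the per-utterance CTC loss looks like the sketch below; `focal_ctc_loss` is a hypothetical helper illustrating that formulation, not code from this repository.

```python
import torch
import torch.nn.functional as F

def focal_ctc_loss(log_probs, targets, input_lengths, target_lengths,
                   alpha=0.25, gamma=1.0, blank=0):
    """Focal-weighted CTC: down-weights utterances the model already fits well.

    log_probs: (T, N, C) log-softmax outputs, as expected by F.ctc_loss.
    """
    # Per-utterance CTC loss, unreduced so each sample can be reweighted.
    ctc = F.ctc_loss(log_probs, targets, input_lengths, target_lengths,
                     blank=blank, reduction="none", zero_infinity=True)
    # p approximates the probability the model assigns to the reference.
    p = torch.exp(-ctc)
    return (alpha * (1.0 - p) ** gamma * ctc).mean()
```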

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 16
  • eval_batch_size: 8
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 4
  • total_train_batch_size: 64
  • total_eval_batch_size: 32
  • optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 100
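
For reference, these settings map onto `transformers.TrainingArguments` roughly as in the sketch below. This is a reconstruction, not the original training script: `output_dir` is a placeholder, and the 4 devices come from a distributed launcher (e.g. torchrun), which turns the per-device sizes (16 train / 8 eval) into the reported totals of 64 and 32.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="w2v2base-vi-finetuned",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=16,      # x4 GPUs = total 64
    per_device_eval_batch_size=8,        # x4 GPUs = total 32
    num_train_epochs=100,
    seed=42,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    adam_beta1=0.9,
    adam_beta2=0.98,
    adam_epsilon=1e-8,
)
```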

Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 533.9511 | 0.94 | 50 | 256.7028 | 15.6243 |
| 337.9142 | 1.89 | 100 | 82.7848 | 0.9952 |
| 61.5907 | 2.83 | 150 | 23.0722 | 1.0 |
| 28.765 | 3.77 | 200 | 21.1465 | 1.0 |
| 27.4031 | 4.72 | 250 | 20.4765 | 1.0 |
| 26.4393 | 5.66 | 300 | 19.7334 | 1.0 |
| 25.6404 | 6.6 | 350 | 19.2024 | 1.0 |
| 24.6513 | 7.55 | 400 | 18.8785 | 1.0 |
| 24.1773 | 8.49 | 450 | 18.6720 | 1.0 |
| 24.4131 | 9.43 | 500 | 18.5339 | 1.0 |
| 24.0393 | 10.38 | 550 | 18.2257 | 1.0 |
| 23.0434 | 11.32 | 600 | 14.5791 | 0.7926 |
| 15.8664 | 12.26 | 650 | 7.8034 | 0.4329 |
| 9.3173 | 13.21 | 700 | 4.4018 | 0.2799 |
| 6.3563 | 14.15 | 750 | 3.2101 | 0.2358 |
| 5.0252 | 15.09 | 800 | 2.6043 | 0.2082 |
| 4.2585 | 16.04 | 850 | 2.2239 | 0.1825 |
| 3.7273 | 16.98 | 900 | 1.9758 | 0.1832 |
| 3.3365 | 17.92 | 950 | 1.7976 | 0.1559 |
| 3.1388 | 18.87 | 1000 | 1.6666 | 0.1595 |
| 2.8685 | 19.81 | 1050 | 1.5991 | 0.1669 |
| 2.6594 | 20.75 | 1100 | 1.4857 | 0.1446 |
| 2.6179 | 21.7 | 1150 | 1.4291 | 0.1417 |
| 2.5211 | 22.64 | 1200 | 1.4383 | 0.1519 |
| 2.3276 | 23.58 | 1250 | 1.3807 | 0.1406 |
| 2.1864 | 24.53 | 1300 | 1.3551 | 0.1362 |
| 2.2149 | 25.47 | 1350 | 1.2799 | 0.1335 |
| 2.0461 | 26.42 | 1400 | 1.2590 | 0.1269 |
| 2.0038 | 27.36 | 1450 | 1.2624 | 0.1343 |
| 1.9107 | 28.3 | 1500 | 1.2001 | 0.1253 |
| 1.9049 | 29.25 | 1550 | 1.1936 | 0.1251 |
| 1.8057 | 30.19 | 1600 | 1.1986 | 0.1314 |
| 1.7086 | 31.13 | 1650 | 1.1642 | 0.1225 |
| 1.7464 | 32.08 | 1700 | 1.1177 | 0.1195 |
| 1.6634 | 33.02 | 1750 | 1.1247 | 0.1140 |
| 1.6189 | 33.96 | 1800 | 1.1151 | 0.1114 |
| 1.5062 | 34.91 | 1850 | 1.1218 | 0.1118 |
| 1.5323 | 35.85 | 1900 | 1.0949 | 0.1062 |
| 1.5779 | 36.79 | 1950 | 1.0786 | 0.1104 |
| 1.4826 | 37.74 | 2000 | 1.0774 | 0.1175 |
| 1.5034 | 38.68 | 2050 | 1.0891 | 0.1187 |
| 1.4051 | 39.62 | 2100 | 1.0873 | 0.1229 |
| 1.4084 | 40.57 | 2150 | 1.0893 | 0.1147 |
| 1.3231 | 41.51 | 2200 | 1.0818 | 0.1086 |
| 1.3182 | 42.45 | 2250 | 1.0795 | 0.1145 |
| 1.2747 | 43.4 | 2300 | 1.0833 | 0.1109 |
| 1.2657 | 44.34 | 2350 | 1.0797 | 0.1095 |
| 1.2867 | 45.28 | 2400 | 1.0753 | 0.1053 |
| 1.2034 | 46.23 | 2450 | 1.0782 | 0.1070 |
| 1.1649 | 47.17 | 2500 | 1.0685 | 0.1034 |
| 1.1314 | 48.11 | 2550 | 1.0622 | 0.0979 |
| 1.158 | 49.06 | 2600 | 1.0880 | 0.1032 |
| 1.0918 | 50.0 | 2650 | 1.0677 | 0.1022 |
| 1.0786 | 50.94 | 2700 | 1.0708 | 0.0980 |
| 1.1275 | 51.89 | 2750 | 1.0576 | 0.0989 |
| 0.9832 | 52.83 | 2800 | 1.0594 | 0.1048 |
| 1.0832 | 53.77 | 2850 | 1.0528 | 0.1026 |
| 1.0483 | 54.72 | 2900 | 1.0524 | 0.1072 |
| 0.9776 | 55.66 | 2950 | 1.0491 | 0.1049 |
| 0.972 | 56.6 | 3000 | 1.0471 | 0.1064 |
| 1.0257 | 57.55 | 3050 | 1.0680 | 0.1104 |
| 0.9965 | 58.49 | 3100 | 1.0723 | 0.1157 |
| 0.961 | 59.43 | 3150 | 1.0600 | 0.1040 |
| 0.9893 | 60.38 | 3200 | 1.0720 | 0.1123 |
| 0.8888 | 61.32 | 3250 | 1.0598 | 0.1060 |
| 0.9583 | 62.26 | 3300 | 1.0703 | 0.1057 |
| 0.8763 | 63.21 | 3350 | 1.0754 | 0.1114 |
| 0.9151 | 64.15 | 3400 | 1.0769 | 0.1045 |
| 0.8981 | 65.09 | 3450 | 1.0714 | 0.1014 |
| 0.8937 | 66.04 | 3500 | 1.0753 | 0.1049 |
| 0.8897 | 66.98 | 3550 | 1.0775 | 0.1071 |
| 0.8903 | 67.92 | 3600 | 1.0775 | 0.1039 |
| 0.8655 | 68.87 | 3650 | 1.0844 | 0.1026 |
| 0.8845 | 69.81 | 3700 | 1.0831 | 0.1020 |
| 0.854 | 70.75 | 3750 | 1.0877 | 0.1022 |
| 0.8797 | 71.7 | 3800 | 1.0739 | 0.1007 |
| 0.8357 | 72.64 | 3850 | 1.0680 | 0.1018 |
| 0.822 | 73.58 | 3900 | 1.0628 | 0.1026 |
| 0.7696 | 74.53 | 3950 | 1.0585 | 0.0995 |
| 0.8735 | 75.47 | 4000 | 1.0596 | 0.0994 |
| 0.8212 | 76.42 | 4050 | 1.0574 | 0.0978 |
| 0.8341 | 77.36 | 4100 | 1.0632 | 0.0998 |
| 0.8275 | 78.3 | 4150 | 1.0620 | 0.1019 |
| 0.7727 | 79.25 | 4200 | 1.0592 | 0.1002 |
| 0.8182 | 80.19 | 4250 | 1.0523 | 0.0969 |
| 0.813 | 81.13 | 4300 | 1.0672 | 0.1039 |
| 0.7961 | 82.08 | 4350 | 1.0689 | 0.1014 |
| 0.7956 | 83.02 | 4400 | 1.0666 | 0.0999 |
| 0.7853 | 83.96 | 4450 | 1.0648 | 0.1001 |
| 0.8167 | 84.91 | 4500 | 1.0634 | 0.0986 |
| 0.779 | 85.85 | 4550 | 1.0633 | 0.1012 |
| 0.8055 | 86.79 | 4600 | 1.0583 | 0.1004 |
| 0.7847 | 87.74 | 4650 | 1.0618 | 0.1016 |
| 0.7961 | 88.68 | 4700 | 1.0615 | 0.1016 |
| 0.7911 | 89.62 | 4750 | 1.0586 | 0.1017 |
| 0.7098 | 90.57 | 4800 | 1.0613 | 0.1031 |
| 0.8353 | 91.51 | 4850 | 1.0596 | 0.1015 |
| 0.7127 | 92.45 | 4900 | 1.0582 | 0.1012 |
| 0.824 | 93.4 | 4950 | 1.0597 | 0.1009 |
| 0.762 | 94.34 | 5000 | 1.0587 | 0.1014 |
| 0.7424 | 95.28 | 5050 | 1.0596 | 0.1013 |
| 0.7701 | 96.23 | 5100 | 1.0605 | 0.1011 |
| 0.7544 | 97.17 | 5150 | 1.0609 | 0.1014 |
| 0.7844 | 98.11 | 5200 | 1.0606 | 0.1014 |
| 0.7769 | 99.06 | 5250 | 1.0607 | 0.1012 |
| 0.7914 | 100.0 | 5300 | 1.0608 | 0.1012 |
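
The Wer column above is word error rate on the evaluation set (e.g. the final 0.1012 ≈ 10.1%). For completeness, a minimal sketch of computing WER with the `evaluate` package (an assumption; it is not listed under framework versions) on placeholder transcripts:

```python
import evaluate

wer_metric = evaluate.load("wer")

predictions = ["xin chao viet nam"]   # hypothetical model outputs
references = ["xin chào việt nam"]    # hypothetical ground truth

print(wer_metric.compute(predictions=predictions, references=references))
```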

Framework versions

  • Transformers 4.34.0
  • Pytorch 2.0.1
  • Datasets 2.14.5
  • Tokenizers 0.14.1