wav2vec2-arabic-colab-f
This model is a fine-tuned version of facebook/wav2vec2-xls-r-1b on the audiofolder dataset. It achieves the following results on the evaluation set:
- Loss: 0.8010
- WER: 32.0897
- CER: 9.9979
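As a quick sanity check, a fine-tuned wav2vec2 CTC checkpoint like this one can typically be used for Arabic speech recognition through the transformers ASR pipeline. The snippet below is a minimal sketch, not taken from the original training or evaluation script; the audio file path is a placeholder and 16 kHz audio input is assumed.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint as a CTC-based ASR pipeline.
# Model id taken from this card; the audio path below is a placeholder.
asr = pipeline(
    "automatic-speech-recognition",
    model="itskavya/wav2vec2-arabic-colab-f",
)

# wav2vec2-xls-r models expect 16 kHz audio; the pipeline decodes and
# resamples common audio file formats via ffmpeg.
result = asr("example_arabic_clip.wav")
print(result["text"])
```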
Model description
More information needed
Intended uses & limitations
More information needed
Training and evaluation data
More information needed
Training procedure
Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- training_steps: 3000
- mixed_precision_training: Native AMP
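For reference, the hyperparameters above roughly correspond to a transformers TrainingArguments configuration like the sketch below. This is an assumption rather than the original training script; the output directory is a placeholder, and fp16=True is used to stand in for "Native AMP".

```python
from transformers import TrainingArguments

# A sketch of TrainingArguments matching the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="wav2vec2-arabic-colab-f",  # placeholder
    learning_rate=1e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=8,   # effective train batch size: 2 * 8 = 16
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=300,
    max_steps=3000,
    fp16=True,                       # mixed-precision training (native AMP)
)
```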
Training results
| Training Loss | Epoch | Step | Validation Loss | WER | CER |
|---|---|---|---|---|---|
| 2.0756 | 13.1611 | 500 | 0.5087 | 41.6867 | 12.5220 |
| 0.3284 | 26.3221 | 1000 | 0.6221 | 36.1204 | 11.0568 |
| 0.1148 | 39.4832 | 1500 | 0.6592 | 34.9208 | 10.7164 |
| 0.0781 | 52.6443 | 2000 | 0.7070 | 33.2774 | 10.2609 |
| 0.0589 | 65.8054 | 2500 | 0.7958 | 32.5336 | 10.0942 |
| 0.0502 | 78.9664 | 3000 | 0.8010 | 32.0897 | 9.9979 |
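The WER and CER columns can be reproduced from model predictions with the evaluate library. The sketch below assumes predictions and references are lists of transcribed strings (the example strings are placeholders), and the *100 scaling matches how the values above are reported.

```python
import evaluate

# Load word-error-rate and character-error-rate metrics.
wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

# Placeholder predictions/references; in practice these come from
# decoding the model's CTC output on the evaluation set.
predictions = ["مرحبا بالعالم"]
references = ["مرحبا بالعالم"]

wer = 100 * wer_metric.compute(predictions=predictions, references=references)
cer = 100 * cer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}  CER: {cer:.4f}")
```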
Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
Model tree for itskavya/wav2vec2-arabic-colab-f
- Base model: facebook/wav2vec2-xls-r-1b