---
library_name: transformers
language:
  - en
license: apache-2.0
base_model: facebook/wav2vec2-large-xlsr-53
tags:
  - generated_from_trainer
datasets:
  - arielcerdap/TimeStamped
metrics:
  - wer
model-index:
  - name: Wav2Vec2 TimeStamped Stutter - Ariel Cerda
    results:
      - task:
          name: Automatic Speech Recognition
          type: automatic-speech-recognition
        dataset:
          name: TimeStamped
          type: arielcerdap/TimeStamped
          args: 'config: en, split: test'
        metrics:
          - name: Wer
            type: wer
            value: 0.9991797676008203
---

# Wav2Vec2 TimeStamped Stutter - Ariel Cerda

This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the TimeStamped dataset. It achieves the following results on the evaluation set:

- Loss: nan
- Wer: 0.9992

Note that the NaN loss and a WER of roughly 100% indicate the run diverged: in the table below, the training loss collapses to 0.0 and the validation loss becomes NaN from step 1600 onward. This checkpoint is therefore unlikely to produce usable transcriptions without further training.
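
For reference, here is a minimal inference sketch with 🤗 Transformers. The repository id `arielcerdap/wav2vec2-timestamped-stutter` is an assumption (substitute this model's actual Hub id), and given the diverged run noted above, the output is not expected to be meaningful:

```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

# Hypothetical repo id -- replace with this model's actual Hub id.
model_id = "arielcerdap/wav2vec2-timestamped-stutter"

processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)
model.eval()

# Load audio and resample to the 16 kHz rate wav2vec2 expects.
speech, _ = librosa.load("sample.wav", sr=16_000)

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding.
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)[0]
print(transcription)
```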

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
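
As a hedged sketch, the dataset named in the metadata can presumably be loaded from the Hub as follows; the `en` config and `test` split names are taken from the model-index entry above and may not match the dataset's actual layout:

```python
from datasets import load_dataset

# Dataset id from the metadata; the "en" config and "test" split
# follow the model-index entry and are assumptions.
dataset = load_dataset("arielcerdap/TimeStamped", "en", split="test")
print(dataset)
```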

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows the list):

- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
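
As a minimal sketch, these values map onto `transformers.TrainingArguments` roughly as follows; `output_dir` is a placeholder, not taken from the original run. The Adam betas and epsilon listed above match the library defaults, so they need no explicit arguments:

```python
from transformers import TrainingArguments

# Hyperparameters from the list above; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="wav2vec2-timestamped-stutter",  # hypothetical
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,  # effective train batch size: 32
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=30,
    fp16=True,  # "Native AMP" mixed precision
)
```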

### Training results

| Training Loss | Epoch   | Step | Validation Loss | Wer    |
|:-------------:|:-------:|:----:|:---------------:|:------:|
| 11.5515       | 1.1696  | 100  | 11.9112         | 1.0001 |
| 13.4395       | 2.3392  | 200  | 11.9112         | 1.0001 |
| 11.9169       | 3.5088  | 300  | 11.9112         | 1.0001 |
| 11.1801       | 4.6784  | 400  | 11.9112         | 1.0001 |
| 15.3393       | 5.8480  | 500  | 11.9112         | 1.0001 |
| 11.8177       | 7.0175  | 600  | 11.9112         | 1.0001 |
| 10.7941       | 8.1871  | 700  | 11.9112         | 1.0001 |
| 13.8075       | 9.3567  | 800  | 11.9112         | 1.0001 |
| 11.5275       | 10.5263 | 900  | 11.9112         | 1.0001 |
| 10.1228       | 11.6959 | 1000 | 11.9112         | 1.0001 |
| 13.7071       | 12.8655 | 1100 | 11.9112         | 1.0001 |
| 11.6933       | 14.0351 | 1200 | 11.9112         | 1.0001 |
| 10.885        | 15.2047 | 1300 | 11.9112         | 1.0001 |
| 14.2349       | 16.3743 | 1400 | 11.9112         | 1.0001 |
| 12.8886       | 17.5439 | 1500 | 11.9112         | 1.0001 |
| 0.0           | 18.7135 | 1600 | nan             | 0.9992 |
| 0.0           | 19.8830 | 1700 | nan             | 0.9992 |
| 0.0           | 21.0526 | 1800 | nan             | 0.9992 |
| 0.0           | 22.2222 | 1900 | nan             | 0.9992 |
| 0.0           | 23.3918 | 2000 | nan             | 0.9992 |
| 0.0           | 24.5614 | 2100 | nan             | 0.9992 |
| 0.0           | 25.7310 | 2200 | nan             | 0.9992 |
| 0.0           | 26.9006 | 2300 | nan             | 0.9992 |
| 0.0           | 28.0702 | 2400 | nan             | 0.9992 |
| 0.0           | 29.2398 | 2500 | nan             | 0.9992 |
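
For context, WER here is standard word error rate, so a value near 1.0 means essentially no reference words are recognized correctly. Below is a minimal sketch of how such a score is computed with the `evaluate` library (not the exact evaluation code for this run):

```python
import evaluate

wer_metric = evaluate.load("wer")

# Toy example: 1 of 4 reference words is wrong -> WER = 0.25.
wer = wer_metric.compute(
    predictions=["the cat sat down"],
    references=["the cat sat up"],
)
print(wer)  # 0.25
```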

### Framework versions

- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.19.1