|
wandb: Currently logged in as: priyanshi-pal (priyanshipal). Use `wandb login --relogin` to force relogin |
|
wandb: wandb version 0.17.7 is available! To upgrade, please run: |
|
wandb: $ pip install wandb --upgrade |
|
wandb: Tracking run with wandb version 0.17.6 |
|
wandb: Run data is saved locally in /scratch/elec/t405-puhe/p/palp3/MUCS/wandb/run-20240822_151726-alv0f5i7 |
|
wandb: Run `wandb offline` to turn off syncing. |
|
wandb: Syncing run eval_pd2000_s300_shuff100_hindi |
|
wandb: ⭐️ View project at https://wandb.ai/priyanshipal/huggingface |
|
wandb: 🚀 View run at https://wandb.ai/priyanshipal/huggingface/runs/alv0f5i7 |
|
/scratch/work/palp3/myenv/lib/python3.11/site-packages/transformers/training_args.py:1525: FutureWarning: `evaluation_strategy` is deprecated and will be removed in version 4.46 of 🤗 Transformers. Use `eval_strategy` instead |
|
warnings.warn( |
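
The fix for the warning above is a one-word rename in the script's TrainingArguments; a minimal sketch of the migration (values illustrative, not taken from this run):

    from transformers import TrainingArguments

    args = TrainingArguments(
        output_dir="out",       # placeholder path
        eval_strategy="steps",  # was: evaluation_strategy="steps"; the old name is removed in v4.46
    )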
|
/scratch/work/palp3/myenv/lib/python3.11/site-packages/transformers/models/auto/configuration_auto.py:957: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead. |
|
warnings.warn( |
|
/scratch/work/palp3/myenv/lib/python3.11/site-packages/transformers/models/auto/feature_extraction_auto.py:329: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead. |
|
warnings.warn( |
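
Both `use_auth_token` warnings come from the auto-class loaders and share one remedy: pass `token` instead. A minimal sketch (the checkpoint name is a placeholder, not the one used in this run):

    from transformers import AutoConfig, AutoFeatureExtractor

    config = AutoConfig.from_pretrained("org/wav2vec2-checkpoint", token=True)  # was: use_auth_token=True
    feature_extractor = AutoFeatureExtractor.from_pretrained("org/wav2vec2-checkpoint", token=True)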
|
/scratch/work/palp3/myenv/lib/python3.11/site-packages/accelerate/accelerator.py:488: FutureWarning: `torch.cuda.amp.GradScaler(args...)` is deprecated. Please use `torch.amp.GradScaler('cuda', args...)` instead. |
|
self.scaler = torch.cuda.amp.GradScaler(**kwargs) |
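
This warning is raised inside accelerate itself, so upgrading accelerate is the practical fix; for code that constructs a scaler directly, the API change it points at is:

    import torch

    # deprecated: scaler = torch.cuda.amp.GradScaler()
    scaler = torch.amp.GradScaler("cuda")  # new device-qualified form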
|
max_steps is given, it will override any value given in num_train_epochs |
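
As the line above notes, max_steps takes precedence whenever both are set; sketched with illustrative values (the run's actual arguments are not shown in this log):

    from transformers import TrainingArguments

    args = TrainingArguments(
        output_dir="out",
        num_train_epochs=3,  # ignored ...
        max_steps=300,       # ... because max_steps wins
    )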
|
Wav2Vec2CTCTokenizer(name_or_path='', vocab_size=149, model_max_length=1000000000000000019884624838656, is_fast=False, padding_side='right', truncation_side='right', special_tokens={'bos_token': '<s>', 'eos_token': '</s>', 'unk_token': '[UNK]', 'pad_token': '[PAD]'}, clean_up_tokenization_spaces=True), added_tokens_decoder={ |
|
147: AddedToken("[UNK]", rstrip=True, lstrip=True, single_word=False, normalized=False, special=False), |
|
148: AddedToken("[PAD]", rstrip=True, lstrip=True, single_word=False, normalized=False, special=False), |
|
149: AddedToken("<s>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True), |
|
150: AddedToken("</s>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True), |
|
} |
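
One detail worth checking in this dump: the base vocabulary has 149 entries, and <s> (id 149) and </s> (id 150) are added on top of it, so the full label space is 151, matching the lm_head output dimension (out_features=151) printed below. A quick sanity check, assuming `tokenizer` is the Wav2Vec2CTCTokenizer shown above:

    # [UNK] (147) and [PAD] (148) sit inside the base vocab; <s> and </s> extend it.
    assert len(tokenizer) == tokenizer.vocab_size + 2  # 149 + 2 == 151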
|
CHECK MODEL PARAMS Wav2Vec2ForCTC( |
|
(wav2vec2): Wav2Vec2Model( |
|
(feature_extractor): Wav2Vec2FeatureEncoder( |
|
(conv_layers): ModuleList( |
|
(0): Wav2Vec2LayerNormConvLayer( |
|
(conv): Conv1d(1, 512, kernel_size=(10,), stride=(5,)) |
|
(layer_norm): LayerNorm((512,), eps=1e-05, elementwise_affine=True) |
|
(activation): GELUActivation() |
|
) |
|
(1-4): 4 x Wav2Vec2LayerNormConvLayer( |
|
(conv): Conv1d(512, 512, kernel_size=(3,), stride=(2,)) |
|
(layer_norm): LayerNorm((512,), eps=1e-05, elementwise_affine=True) |
|
(activation): GELUActivation() |
|
) |
|
(5-6): 2 x Wav2Vec2LayerNormConvLayer( |
|
(conv): Conv1d(512, 512, kernel_size=(2,), stride=(2,)) |
|
(layer_norm): LayerNorm((512,), eps=1e-05, elementwise_affine=True) |
|
(activation): GELUActivation() |
|
) |
|
) |
|
) |
|
(feature_projection): Wav2Vec2FeatureProjection( |
|
(layer_norm): LayerNorm((512,), eps=1e-05, elementwise_affine=True) |
|
(projection): Linear(in_features=512, out_features=1024, bias=True) |
|
(dropout): Dropout(p=0.0, inplace=False) |
|
) |
|
(encoder): Wav2Vec2EncoderStableLayerNorm( |
|
(pos_conv_embed): Wav2Vec2PositionalConvEmbedding( |
|
(conv): ParametrizedConv1d( |
|
1024, 1024, kernel_size=(128,), stride=(1,), padding=(64,), groups=16 |
|
(parametrizations): ModuleDict( |
|
(weight): ParametrizationList( |
|
(0): _WeightNorm() |
|
) |
|
) |
|
) |
|
(padding): Wav2Vec2SamePadLayer() |
|
(activation): GELUActivation() |
|
) |
|
(layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True) |
|
(dropout): Dropout(p=0.0, inplace=False) |
|
(layers): ModuleList( |
|
(0-23): 24 x Wav2Vec2EncoderLayerStableLayerNorm( |
|
(attention): Wav2Vec2SdpaAttention( |
|
(k_proj): Linear(in_features=1024, out_features=1024, bias=True) |
|
(v_proj): Linear(in_features=1024, out_features=1024, bias=True) |
|
(q_proj): Linear(in_features=1024, out_features=1024, bias=True) |
|
(out_proj): Linear(in_features=1024, out_features=1024, bias=True) |
|
) |
|
(dropout): Dropout(p=0.0, inplace=False) |
|
(layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True) |
|
(feed_forward): Wav2Vec2FeedForward( |
|
(intermediate_dropout): Dropout(p=0.0, inplace=False) |
|
(intermediate_dense): Linear(in_features=1024, out_features=4096, bias=True) |
|
(intermediate_act_fn): GELUActivation() |
|
(output_dense): Linear(in_features=4096, out_features=1024, bias=True) |
|
(output_dropout): Dropout(p=0.0, inplace=False) |
|
) |
|
(final_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True) |
|
) |
|
) |
|
) |
|
) |
|
(dropout): Dropout(p=0.0, inplace=False) |
|
(lm_head): Linear(in_features=1024, out_features=151, bias=True) |
|
) |
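
From the printed feature_extractor stack one can read off the encoder's frame rate: kernels (10, 3, 3, 3, 3, 2, 2) with strides (5, 2, 2, 2, 2, 2, 2) yield one frame per 320 input samples, i.e. 20 ms at 16 kHz. A small helper for sanity-checking sequence lengths against this dump (not part of the original script):

    def conv_out_len(n_samples, kernels=(10, 3, 3, 3, 3, 2, 2), strides=(5, 2, 2, 2, 2, 2, 2)):
        """Output length of the conv feature encoder printed above."""
        for k, s in zip(kernels, strides):
            n_samples = (n_samples - k) // s + 1
        return n_samples

    print(conv_out_len(16000))  # -> 49 frames for one second of 16 kHz audio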
|
check the eval set length 572 |
|
08/22/2024 15:17:37 - INFO - __main__ - *** Evaluate *** |
|
/scratch/work/palp3/myenv/lib/python3.11/site-packages/transformers/models/wav2vec2/processing_wav2vec2.py:157: UserWarning: `as_target_processor` is deprecated and will be removed in v5 of Transformers. You can process your labels by using the argument `text` of the regular `__call__` method (either in the same call as your audio inputs, or in a separate call). |
|
warnings.warn( |
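
The deprecated pattern this warning refers to is wrapping label encoding in `as_target_processor()`; the replacement is the `text` argument of the processor's regular call, as the message says. A minimal sketch (variable names are placeholders):

    # old: with processor.as_target_processor(): labels = processor(transcript).input_ids
    inputs = processor(audio=speech_array, sampling_rate=16000, return_tensors="pt")
    labels = processor(text=transcript).input_ids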
|
 83%|████████▎ | 30/36 [00:30<00:08, 1.37s/it]Traceback (most recent call last): |
|
File "/scratch/elec/puhe/p/palp3/MUCS/eval_script_indicwav2vec.py", line 790, in <module> |
|
main() |
|
File "/scratch/elec/puhe/p/palp3/MUCS/eval_script_indicwav2vec.py", line 759, in main |
|
metrics = trainer.evaluate() |
|
^^^^^^^^^^^^^^^^^^ |
|
File "/scratch/work/palp3/myenv/lib/python3.11/site-packages/transformers/trainer.py", line 3666, in evaluate |
|
output = eval_loop( |
|
^^^^^^^^^^ |
|
File "/scratch/work/palp3/myenv/lib/python3.11/site-packages/transformers/trainer.py", line 3857, in evaluation_loop |
|
losses, logits, labels = self.prediction_step(model, inputs, prediction_loss_only, ignore_keys=ignore_keys) |
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
File "/scratch/work/palp3/myenv/lib/python3.11/site-packages/transformers/trainer.py", line 4075, in prediction_step |
|
loss, outputs = self.compute_loss(model, inputs, return_outputs=True) |
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
File "/scratch/work/palp3/myenv/lib/python3.11/site-packages/transformers/trainer.py", line 3363, in compute_loss |
|
outputs = model(**inputs) |
|
^^^^^^^^^^^^^^^ |
|
File "/scratch/work/palp3/myenv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl |
|
return self._call_impl(*args, **kwargs) |
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
File "/scratch/work/palp3/myenv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl |
|
return forward_call(*args, **kwargs) |
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
File "/scratch/work/palp3/myenv/lib/python3.11/site-packages/accelerate/utils/operations.py", line 819, in forward |
|
return model_forward(*args, **kwargs) |
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
File "/scratch/work/palp3/myenv/lib/python3.11/site-packages/accelerate/utils/operations.py", line 807, in __call__ |
|
return convert_to_fp32(self.model_forward(*args, **kwargs)) |
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
File "/scratch/work/palp3/myenv/lib/python3.11/site-packages/torch/amp/autocast_mode.py", line 43, in decorate_autocast |
|
return func(*args, **kwargs) |
|
^^^^^^^^^^^^^^^^^^^^^ |
|
File "/scratch/work/palp3/myenv/lib/python3.11/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 2228, in forward |
|
outputs = self.wav2vec2( |
|
^^^^^^^^^^^^^^ |
|
File "/scratch/work/palp3/myenv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl |
|
return self._call_impl(*args, **kwargs) |
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
File "/scratch/work/palp3/myenv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl |
|
return forward_call(*args, **kwargs) |
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
File "/scratch/work/palp3/myenv/lib/python3.11/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 1809, in forward |
|
extract_features = self.feature_extractor(input_values) |
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
File "/scratch/work/palp3/myenv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl |
|
return self._call_impl(*args, **kwargs) |
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
File "/scratch/work/palp3/myenv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl |
|
return forward_call(*args, **kwargs) |
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
File "/scratch/work/palp3/myenv/lib/python3.11/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 463, in forward |
|
hidden_states = conv_layer(hidden_states) |
|
^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
File "/scratch/work/palp3/myenv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl |
|
return self._call_impl(*args, **kwargs) |
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
File "/scratch/work/palp3/myenv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl |
|
return forward_call(*args, **kwargs) |
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
File "/scratch/work/palp3/myenv/lib/python3.11/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 335, in forward |
|
hidden_states = self.layer_norm(hidden_states) |
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
File "/scratch/work/palp3/myenv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl |
|
return self._call_impl(*args, **kwargs) |
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
File "/scratch/work/palp3/myenv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl |
|
return forward_call(*args, **kwargs) |
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
File "/scratch/work/palp3/myenv/lib/python3.11/site-packages/torch/nn/modules/normalization.py", line 202, in forward |
|
return F.layer_norm( |
|
^^^^^^^^^^^^^ |
|
File "/scratch/work/palp3/myenv/lib/python3.11/site-packages/torch/nn/functional.py", line 2576, in layer_norm |
|
return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled) |
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 4.00 GiB. GPU 0 has a total capacity of 15.77 GiB of which 1.55 GiB is free. Including non-PyTorch memory, this process has 14.21 GiB memory in use. Of the allocated memory 11.68 GiB is allocated by PyTorch, and 2.17 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html |
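
The allocation fails inside the conv feature encoder during mixed-precision evaluation, which points at batch size times raw audio length as the driver. Since 572 eval examples finish in 36 steps, the per-device eval batch size was apparently 16; the usual first fixes are shrinking that batch, periodically moving accumulated logits off the GPU, and trying the allocator setting the message itself suggests. A sketch of those knobs (untested against this run):

    # export PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True   # allocator hint from the error message

    from transformers import TrainingArguments

    args = TrainingArguments(
        output_dir="out",
        per_device_eval_batch_size=8,  # down from the apparent 16 (572 examples / 36 steps)
        eval_accumulation_steps=4,     # move logits to the CPU every few eval steps
    )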
|
wandb: 0.033 MB of 0.033 MB uploaded
wandb: 🚀 View run eval_pd2000_s300_shuff100_hindi at: https://wandb.ai/priyanshipal/huggingface/runs/alv0f5i7 |
|
wandb: ⭐️ View project at: https://wandb.ai/priyanshipal/huggingface |
|
wandb: Synced 6 W&B file(s), 0 media file(s), 0 artifact file(s) and 0 other file(s) |
|
wandb: Find logs at: ./wandb/run-20240822_151726-alv0f5i7/logs |
|
wandb: WARNING The new W&B backend becomes opt-out in version 0.18.0; try it out with `wandb.require("core")`! See https://wandb.me/wandb-core for more information. |
|
|