meeting_qa / output (1).log
/usr/local/lib/python3.11/dist-packages/torch/_dynamo/eval_frame.py:745: UserWarning: torch.utils.checkpoint: the use_reentrant parameter should be passed explicitly. In version 2.5 we will raise an exception if use_reentrant is not passed. use_reentrant=False is recommended, but if you need to preserve the current default behavior, you can pass use_reentrant=True. Refer to docs for more details on the differences between the two variants.
return fn(*args, **kwargs)
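The checkpointing warning above states the fix itself: pass use_reentrant explicitly when gradient checkpointing is enabled. A minimal sketch, assuming a Transformers model like the one driving this run (the variable name is illustrative, not taken from the log):

from transformers import AutoModelForCausalLM

# Non-reentrant checkpointing is the variant the warning recommends.
model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
model.gradient_checkpointing_enable(
    gradient_checkpointing_kwargs={"use_reentrant": False}
)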
wandb: WARNING The get_url method is deprecated and will be removed in a future release. Please use `run.url` instead.
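The wandb deprecation above only asks for the run.url property in place of the soon-to-be-removed get_url() method; a short sketch, with an illustrative project name:

import wandb

run = wandb.init(project="meeting_qa")  # project name is illustrative
print(run.url)                          # replaces the deprecated run.get_url()
run.finish()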
Saving model checkpoint to ./results/checkpoint-10
tokenizer config file saved in ./results/checkpoint-10/tokenizer_config.json
Special tokens file saved in ./results/checkpoint-10/special_tokens_map.json
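The checkpoint directories in this log (checkpoint-10 through checkpoint-100 under ./results) are consistent with step-based saving every 10 steps. A sketch of the corresponding TrainingArguments; everything beyond output_dir and the 10-step cadence is an assumption rather than something read from the log:

from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./results",              # matches the checkpoint paths above
    save_strategy="steps",
    save_steps=10,                       # implied by checkpoint-10, -20, ..., -100
    gradient_checkpointing=True,         # consistent with the checkpointing warning above
    gradient_checkpointing_kwargs={"use_reentrant": False},
)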
Saving model checkpoint to ./results/checkpoint-20
loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--mistralai--Mistral-7B-v0.1/snapshots/7231864981174d9bee8c7687c24c8344414eae6b/config.json
Model config MistralConfig {
  "architectures": [
    "MistralForCausalLM"
  ],
  "attention_dropout": 0.0,
  "bos_token_id": 1,
  "eos_token_id": 2,
  "head_dim": null,
  "hidden_act": "silu",
  "hidden_size": 4096,
  "initializer_range": 0.02,
  "intermediate_size": 14336,
  "max_position_embeddings": 32768,
  "model_type": "mistral",
  "num_attention_heads": 32,
  "num_hidden_layers": 32,
  "num_key_value_heads": 8,
  "rms_norm_eps": 1e-05,
  "rope_theta": 10000.0,
  "sliding_window": 4096,
  "tie_word_embeddings": false,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.53.2",
  "use_cache": true,
  "vocab_size": 32000
}
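The block above is the cached Mistral-7B-v0.1 configuration reported at checkpoint time; the same values can be inspected directly, e.g.:

from transformers import AutoConfig

config = AutoConfig.from_pretrained("mistralai/Mistral-7B-v0.1")
print(config.hidden_size, config.num_hidden_layers, config.num_key_value_heads)
# 4096 32 8, matching the dump above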
tokenizer config file saved in ./results/checkpoint-20/tokenizer_config.json
Special tokens file saved in ./results/checkpoint-20/special_tokens_map.json
Saving model checkpoint to ./results/checkpoint-30
tokenizer config file saved in ./results/checkpoint-30/tokenizer_config.json
Special tokens file saved in ./results/checkpoint-30/special_tokens_map.json
Saving model checkpoint to ./results/checkpoint-40
tokenizer config file saved in ./results/checkpoint-40/tokenizer_config.json
Special tokens file saved in ./results/checkpoint-40/special_tokens_map.json
Saving model checkpoint to ./results/checkpoint-50
tokenizer config file saved in ./results/checkpoint-50/tokenizer_config.json
Special tokens file saved in ./results/checkpoint-50/special_tokens_map.json
Saving model checkpoint to ./results/checkpoint-60
tokenizer config file saved in ./results/checkpoint-60/tokenizer_config.json
Special tokens file saved in ./results/checkpoint-60/special_tokens_map.json
Saving model checkpoint to ./results/checkpoint-70
tokenizer config file saved in ./results/checkpoint-70/tokenizer_config.json
Special tokens file saved in ./results/checkpoint-70/special_tokens_map.json
Saving model checkpoint to ./results/checkpoint-80
tokenizer config file saved in ./results/checkpoint-80/tokenizer_config.json
Special tokens file saved in ./results/checkpoint-80/special_tokens_map.json
Saving model checkpoint to ./results/checkpoint-90
tokenizer config file saved in ./results/checkpoint-90/tokenizer_config.json
Special tokens file saved in ./results/checkpoint-90/special_tokens_map.json
Saving model checkpoint to ./results/checkpoint-100
tokenizer config file saved in ./results/checkpoint-100/tokenizer_config.json
Special tokens file saved in ./results/checkpoint-100/special_tokens_map.json
Training completed. Do not forget to share your model on huggingface.co/models =)
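For the reminder above, a short sketch of sharing the run's artifacts on the Hub; the repository id is illustrative, and trainer, model, and tokenizer are assumed to still be in scope from this training session:

# Push via the Trainer (uses the hub settings in TrainingArguments), or push
# the model and tokenizer directly under an explicit repository id.
trainer.push_to_hub()
model.push_to_hub("namanhngco/meeting-qa-mistral-lora")
tokenizer.push_to_hub("namanhngco/meeting-qa-mistral-lora")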
/usr/local/lib/python3.11/dist-packages/peft/tuners/tuners_utils.py:190: UserWarning: Already found a `peft_config` attribute in the model. This will lead to having multiple adapters in the model. Make sure to know what you are doing!
warnings.warn(
/usr/local/lib/python3.11/dist-packages/peft/peft_model.py:585: UserWarning: Found missing adapter keys while loading the checkpoint: ['base_model.model.base_model.model.model.layers.0.self_attn.q_proj.lora_A.default.weight', 'base_model.model.base_model.model.model.layers.0.self_attn.q_proj.lora_B.default.weight', 'base_model.model.base_model.model.model.layers.0.self_attn.k_proj.lora_A.default.weight', 'base_model.model.base_model.model.model.layers.0.self_attn.k_proj.lora_B.default.weight', 'base_model.model.base_model.model.model.layers.0.self_attn.v_proj.lora_A.default.weight', 'base_model.model.base_model.model.model.layers.0.self_attn.v_proj.lora_B.default.weight', 'base_model.model.base_model.model.model.layers.0.self_attn.o_proj.lora_A.default.weight', 'base_model.model.base_model.model.model.layers.0.self_attn.o_proj.lora_B.default.weight', 'base_model.model.base_model.model.model.layers.0.mlp.gate_proj.lora_A.default.weight', 'base_model.model.base_model.model.model.layers.0.mlp.gate_proj.lora_B.default.weight', 'base_model.model.base_model.model.model.layers.0.mlp.up_proj.lora_A.default.weight', 'base_model.model.base_model.model.model.layers.0.mlp.up_proj.lora_B.default.weight', 'base_model.model.base_model.model.model.layers.0.mlp.down_proj.lora_A.default.weight', 'base_model.model.base_model.model.model.layers.0.mlp.down_proj.lora_B.default.weight', 'base_model.model.base_model.model.model.layers.1.self_attn.q_proj.lora_A.default.weight', 'base_model.model.base_model.model.model.layers.1.self_attn.q_proj.lora_B.default.weight', 'base_model.model.base_model.model.model.layers.1.self_attn.k_proj.lora_A.default.weight', 'base_model.model.base_model.model.model.layers.1.self_attn.k_proj.lora_B.default.weight', 'base_model.model.base_model.model.model.layers.1.self_attn.v_proj.lora_A.default.weight', 'base_model.model.base_model.model.model.layers.1.self_attn.v_proj.lora_B.default.weight', 'base_model.model.base_model.model.model.layers.1.self_attn.o_proj.lora_A.default.weight', 'base_model.model.base_model.model.model.layers.1.self_attn.o_proj.lora_B.default.weight', 'base_model.model.base_model.model.model.layers.1.mlp.gate_proj.lora_A.default.weight', 'base_model.model.base_model.model.model.layers.1.mlp.gate_proj.lora_B.default.weight', 'base_model.model.base_model.model.model.layers.1.mlp.up_proj.lora_A.default.weight', 'base_model.model.base_model.model.model.layers.1.mlp.up_proj.lora_B.default.weight', 'base_model.model.base_model.model.model.layers.1.mlp.down_proj.lora_A.default.weight', 'base_model.model.base_model.model.model.layers.1.mlp.down_proj.lora_B.default.weight', 'base_model.model.base_model.model.model.layers.2.self_attn.q_proj.lora_A.default.weight', 'base_model.model.base_model.model.model.layers.2.self_attn.q_proj.lora_B.default.weight', 'base_model.model.base_model.model.model.layers.2.self_attn.k_proj.lora_A.default.weight', 'base_model.model.base_model.model.model.layers.2.self_attn.k_proj.lora_B.default.weight', 'base_model.model.base_model.model.model.layers.2.self_attn.v_proj.lora_A.default.weight', 'base_model.model.base_model.model.model.layers.2.self_attn.v_proj.lora_B.default.weight', 'base_model.model.base_model.model.model.layers.2.self_attn.o_proj.lora_A.default.weight', 'base_model.model.base_model.model.model.layers.2.self_attn.o_proj.lora_B.default.weight', 'base_model.model.base_model.model.model.layers.2.mlp.gate_proj.lora_A.default.weight', 'base_model.model.base_model.model.model.layers.2.mlp.gate_proj.lora_B.default.weight', 
'base_model.model.base_model.model.model.layers.2.mlp.up_proj.lora_A.default.weight', 'base_model.model.base_model.model.model.layers.2.mlp.up_proj.lora_B.default.weight', 'base_model.model.base_model.model.model.layers.2.mlp.down_proj.lora_A.default.weight', 'base_model.model.base_model.model.model.layers.2.mlp.down_proj.lora_B.default.weight', 'base_model.model.base_model.model.model.layers.3.self_attn.q_proj.lora_A.default.weight', 'base_model.model.base_model.model.model.layers.3.self_attn.q_proj.lora_B.default.weight', 'base_model.model.base_
warnings.warn(warn_message)
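The two PEFT warnings above (an existing peft_config on the model, plus missing adapter keys whose names carry a doubled base_model.model.base_model.model prefix) usually indicate that the adapter checkpoint was loaded onto a model that had already been wrapped with PEFT. A sketch of the usual remedy, assuming the goal is to reload one of the checkpoints saved earlier in this log: attach the adapter to the plain base model rather than to an already-wrapped one.

from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the bare base model first (no get_peft_model or prior adapter applied)...
base_model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
# ...then attach the saved LoRA adapter so the parameter names line up.
model = PeftModel.from_pretrained(base_model, "./results/checkpoint-100")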
The attention mask is not set and cannot be inferred from input because pad token is same as eos token. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.
</s>мммммммммммммммммммммммммммммммммммммммммммммммммммммммммммммммммммммммммммммммммммммммммммммммммммммммммммммммммммммммммммммммммммммммммммммммммммммммммммммммммммммммммммммммммммммммммммммммммммм
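The attention-mask warning and the degenerate run of repeated characters in the generated output above are consistent with generating while pad_token equals eos_token and no explicit attention mask is passed. A minimal generation sketch that supplies the mask, with an illustrative prompt and the adapter-loaded model from the previous sketch assumed to be in scope:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
inputs = tokenizer("Summarize the key decisions from the meeting.", return_tensors="pt")

output_ids = model.generate(
    input_ids=inputs["input_ids"],
    attention_mask=inputs["attention_mask"],  # pass the mask explicitly
    pad_token_id=tokenizer.eos_token_id,      # make padding unambiguous for generate()
    max_new_tokens=100,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))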