[2025-05-08 14:13:01] Created output directory: train_results_ar/meta-llama_Llama-2-7b-hf_full_upsample1000
[2025-05-08 14:13:02] Chat mode disabled
[2025-05-08 14:13:02] Set MODEL_MAX_LENGTH to 4096 for Llama-2 model
[2025-05-08 14:13:02] Model size is over 3B (7B). Using LoRA training.
[2025-05-08 14:13:02] Adjusted learning rate for LoRA: 2e-4
[2025-05-08 14:13:02] No QA format data will be used
[2025-05-08 14:13:02] =======================================
[2025-05-08 14:13:02] Starting training for model: meta-llama/Llama-2-7b-hf
[2025-05-08 14:13:02] =======================================
[2025-05-08 14:13:02] CUDA_VISIBLE_DEVICES: 0,1,2,3,4,5,6,7
[2025-05-08 14:13:02] WANDB_PROJECT: wikidyk-ar
[2025-05-08 14:13:02] DATA_PATH: data/wikidyk2022-2025_01082025_gpt-4o_evalv2_pages_formatted_combined_v2.json
[2025-05-08 14:13:02] Global Batch Size: 256
[2025-05-08 14:13:02] Data Size: -1
[2025-05-08 14:13:02] Executing command: torchrun --nproc_per_node "8" --master-port 29503 src/train.py --model_name_or_path "meta-llama/Llama-2-7b-hf" --data_path "data/wikidyk2022-2025_01082025_gpt-4o_evalv2_pages_formatted_combined_v2.json" --output_dir "train_results_ar/meta-llama_Llama-2-7b-hf_full_upsample1000" --num_upsample "1000" --per_device_train_batch_size "32" --gradient_accumulation_steps "1" --learning_rate "2e-4" --num_train_epochs "1" --model_max_length "4096" --report_to wandb --logging_steps 50 --save_strategy no --bf16 True --use_flash_attention_2 True --qa_data_ratio "-1" --predict_mask "false" --use_lora --lora_r 32 --lora_alpha 16
[2025-05-08 14:13:02] Training started at Thursday, May 08, 2025 14:13:02 CST
W0508 14:13:03.167000 3287123 site-packages/torch/distributed/run.py:792]
W0508 14:13:03.167000 3287123 site-packages/torch/distributed/run.py:792] *****************************************
W0508 14:13:03.167000 3287123 site-packages/torch/distributed/run.py:792] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
W0508 14:13:03.167000 3287123 site-packages/torch/distributed/run.py:792] *****************************************
WARNING:root:Output directory: train_results_ar/meta-llama_Llama-2-7b-hf_full_upsample1000
The model was loaded with use_flash_attention_2=True, which is deprecated and may be removed in a future release. Please use `attn_implementation="flash_attention_2"` instead.
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
Loading checkpoint shards:   0%|          | 0/2 [00:00<?, ?it/s]
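
Note on the logged Global Batch Size: it follows directly from the launch flags, 8 processes (--nproc_per_node) x 32 (--per_device_train_batch_size) x 1 (--gradient_accumulation_steps) = 256 examples per optimizer step.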
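
The deprecation warning above comes from Hugging Face transformers. A minimal sketch of the load path it recommends, assuming a recent transformers version; the model name and bf16 dtype are taken from the log, everything else is illustrative:

    import torch
    from transformers import AutoModelForCausalLM

    # Pass attn_implementation instead of the deprecated use_flash_attention_2 flag.
    model = AutoModelForCausalLM.from_pretrained(
        "meta-llama/Llama-2-7b-hf",
        attn_implementation="flash_attention_2",  # replaces use_flash_attention_2=True
        torch_dtype=torch.bfloat16,               # matches the --bf16 True flag
    )
    model.to("cuda")  # addresses the "not initialized on GPU" warning above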
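
Since the run passes --use_lora --lora_r 32 --lora_alpha 16, a sketch of the adapter setup those flags imply, assuming src/train.py wraps the model with Hugging Face PEFT (the log does not show the actual implementation, and the target_modules list is a common Llama-2 choice, not taken from the source):

    from peft import LoraConfig, get_peft_model

    lora_config = LoraConfig(
        r=32,            # --lora_r
        lora_alpha=16,   # --lora_alpha
        target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed, not in the log
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, lora_config)
    model.print_trainable_parameters()  # only the adapter weights are trainable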