[2025-05-09 01:07:40] Created output directory: train_results_ar/Qwen_Qwen2.5-7B_full_upsample1000
[2025-05-09 01:07:40] Chat mode disabled
[2025-05-09 01:07:40] Model size is over 3B (7B). Using LoRA training.
[2025-05-09 01:07:40] Adjusted learning rate for LoRA: 2e-4
[2025-05-09 01:07:40] No QA format data will be used
[2025-05-09 01:07:40] =======================================
[2025-05-09 01:07:40] Starting training for model: Qwen/Qwen2.5-7B
[2025-05-09 01:07:40] =======================================
[2025-05-09 01:07:40] CUDA_VISIBLE_DEVICES: 0,1,2,3,4,5,6,7
[2025-05-09 01:07:40] WANDB_PROJECT: wikidyk-ar
[2025-05-09 01:07:40] DATA_PATH: data/wikidyk2022-2025_01082025_gpt-4o_evalv2_pages_formatted_combined_v2.json
[2025-05-09 01:07:40] Global Batch Size: 256
[2025-05-09 01:07:40] Data Size: -1
[2025-05-09 01:07:40] Executing command: torchrun --nproc_per_node "8" --master-port 29503 src/train.py --model_name_or_path "Qwen/Qwen2.5-7B" --data_path "data/wikidyk2022-2025_01082025_gpt-4o_evalv2_pages_formatted_combined_v2.json" --output_dir "train_results_ar/Qwen_Qwen2.5-7B_full_upsample1000" --num_upsample "1000" --per_device_train_batch_size "32" --gradient_accumulation_steps "1" --learning_rate "2e-4" --num_train_epochs "1" --model_max_length "4096" --report_to wandb --logging_steps 50 --save_strategy no --bf16 True --use_flash_attention_2 True --qa_data_ratio "-1" --predict_mask "false" --use_lora --lora_r 32 --lora_alpha 16
[2025-05-09 01:07:40] Training started at Friday, May 9, 2025 01:07:40 CST
W0509 01:07:41.339000 3290227 site-packages/torch/distributed/run.py:792]
W0509 01:07:41.339000 3290227 site-packages/torch/distributed/run.py:792] *****************************************
W0509 01:07:41.339000 3290227 site-packages/torch/distributed/run.py:792] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
W0509 01:07:41.339000 3290227 site-packages/torch/distributed/run.py:792] *****************************************
WARNING:root:Output directory: train_results_ar/Qwen_Qwen2.5-7B_full_upsample1000
WARNING:root:Output directory: train_results_ar/Qwen_Qwen2.5-7B_full_upsample1000
WARNING:root:Output directory: train_results_ar/Qwen_Qwen2.5-7B_full_upsample1000
WARNING:root:Output directory: train_results_ar/Qwen_Qwen2.5-7B_full_upsample1000
WARNING:root:Output directory: train_results_ar/Qwen_Qwen2.5-7B_full_upsample1000
WARNING:root:Output directory: train_results_ar/Qwen_Qwen2.5-7B_full_upsample1000
WARNING:root:Output directory: train_results_ar/Qwen_Qwen2.5-7B_full_upsample1000
WARNING:root:Output directory: train_results_ar/Qwen_Qwen2.5-7B_full_upsample1000
The model was loaded with use_flash_attention_2=True, which is deprecated and may be removed in a future release. Please use `attn_implementation="flash_attention_2"` instead.
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
Loading checkpoint shards:   0%|          | 0/4 [00:00
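
Note: the deprecation and device warnings above point at the loading-time options rather than the training logic. Below is a minimal sketch (not taken from this repo's src/train.py) of how the logged flags could map onto the non-deprecated Hugging Face Transformers / PEFT calls; the target_modules list is an assumption for illustration, and the actual module selection is whatever src/train.py does.

import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Load Qwen2.5-7B in bf16 with the recommended attention flag
# (replaces the deprecated use_flash_attention_2=True seen in the log).
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-7B",
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",
)
model.to("cuda")  # move to GPU after CPU init, as the second warning suggests

# LoRA settings matching the logged CLI flags: --lora_r 32 --lora_alpha 16.
# target_modules is an assumption, not read from the training script.
lora_config = LoraConfig(
    r=32,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()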