[2025-04-27 14:34:27,301] [INFO] [real_accelerator.py:239:get_accelerator] Setting ds_accelerator to cuda (auto detect)
Warning: The cache directory for DeepSpeed Triton autotune, /home/jg9904/.triton/autotune, appears to be on an NFS system. While this is generally acceptable, if you experience slowdowns or hanging when DeepSpeed exits, it is recommended to set the TRITON_CACHE_DIR environment variable to a non-NFS path.
[2025-04-27 14:34:29,565] [WARNING] [runner.py:215:fetch_hostfile] Unable to find hostfile, will proceed with training with local resources only.
Detected VISIBLE_DEVICES=0,1,2,3,4,5: setting --include=localhost:0,1,2,3,4,5
[2025-04-27 14:34:29,565] [INFO] [runner.py:605:main] cmd = /home/jg9904/.conda/envs/rlenv/bin/python -u -m deepspeed.launcher.launch --world_info=eyJsb2NhbGhvc3QiOiBbMCwgMSwgMiwgMywgNCwgNV19 --master_addr=127.0.0.1 --master_port=29650 --enable_each_rank_log=None train.py --deepspeed scripts/newzero3.json --seed 26 --model_name_or_path /scratch/gpfs/jg9904/saved_models/Qwen2.5-14B-Instruct-RL --train_tokenized_file /scratch/gpfs/jg9904/cogbehaveRL/RL/offline_rl_v2/data/14K_reward-r2.jsonl --output_dir /scratch/gpfs/jg9904/saved_models/Qwen2.5-14B-Instruct-RL-2.1 --per_device_train_batch_size 1 --gradient_accumulation_steps 2 --evaluation_strategy no --save_strategy no --learning_rate 9e-7 --lr_scheduler_type cosine --save_only_model True --remove_unused_columns False --warmup_ratio 0.03 --num_train_epochs 2 --logging_steps 1 --report_to tensorboard --gradient_checkpointing True --overwrite_output_dir --bf16 True
[2025-04-27 14:34:31,012] [INFO] [real_accelerator.py:239:get_accelerator] Setting ds_accelerator to cuda (auto detect)
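The Triton autotune cache warning above is actionable: pointing TRITON_CACHE_DIR at node-local storage avoids the NFS-related hangs it describes. A minimal sketch, assuming /tmp (or equivalent local scratch) is writable on the compute node; set it before DeepSpeed is imported, or export it in the batch script:

```python
import os

# Assumption: /tmp is node-local on this cluster; adjust the path for your site.
cache_dir = os.path.join("/tmp", os.environ.get("USER", "user"), "triton-autotune")
os.makedirs(cache_dir, exist_ok=True)
os.environ["TRITON_CACHE_DIR"] = cache_dir  # must be set before DeepSpeed/Triton first touch the cache
```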
[2025-04-27 14:34:33,028] [INFO] [launch.py:146:main] WORLD INFO DICT: {'localhost': [0, 1, 2, 3, 4, 5]}
[2025-04-27 14:34:33,028] [INFO] [launch.py:152:main] nnodes=1, num_local_procs=6, node_rank=0
[2025-04-27 14:34:33,028] [INFO] [launch.py:163:main] global_rank_mapping=defaultdict(<class 'list'>, {'localhost': [0, 1, 2, 3, 4, 5]})
[2025-04-27 14:34:33,028] [INFO] [launch.py:164:main] dist_world_size=6
[2025-04-27 14:34:33,028] [INFO] [launch.py:168:main] Setting CUDA_VISIBLE_DEVICES=0,1,2,3,4,5
[2025-04-27 14:34:33,028] [INFO] [launch.py:256:main] process 2201280 spawned with command: ['/home/jg9904/.conda/envs/rlenv/bin/python', '-u', 'train.py', '--local_rank=0', '--deepspeed', 'scripts/newzero3.json', '--seed', '26', '--model_name_or_path', '/scratch/gpfs/jg9904/saved_models/Qwen2.5-14B-Instruct-RL', '--train_tokenized_file', '/scratch/gpfs/jg9904/cogbehaveRL/RL/offline_rl_v2/data/14K_reward-r2.jsonl', '--output_dir', '/scratch/gpfs/jg9904/saved_models/Qwen2.5-14B-Instruct-RL-2.1', '--per_device_train_batch_size', '1', '--gradient_accumulation_steps', '2', '--evaluation_strategy', 'no', '--save_strategy', 'no', '--learning_rate', '9e-7', '--lr_scheduler_type', 'cosine', '--save_only_model', 'True', '--remove_unused_columns', 'False', '--warmup_ratio', '0.03', '--num_train_epochs', '2', '--logging_steps', '1', '--report_to', 'tensorboard', '--gradient_checkpointing', 'True', '--overwrite_output_dir', '--bf16', 'True']
[2025-04-27 14:34:33,034] [INFO] [launch.py:256:main] process 2201281 spawned with command: ['/home/jg9904/.conda/envs/rlenv/bin/python', '-u', 'train.py', '--local_rank=1', '--deepspeed', 'scripts/newzero3.json', '--seed', '26', '--model_name_or_path', '/scratch/gpfs/jg9904/saved_models/Qwen2.5-14B-Instruct-RL', '--train_tokenized_file', '/scratch/gpfs/jg9904/cogbehaveRL/RL/offline_rl_v2/data/14K_reward-r2.jsonl', '--output_dir', '/scratch/gpfs/jg9904/saved_models/Qwen2.5-14B-Instruct-RL-2.1', '--per_device_train_batch_size', '1', '--gradient_accumulation_steps', '2', '--evaluation_strategy', 'no', '--save_strategy', 'no', '--learning_rate', '9e-7', '--lr_scheduler_type', 'cosine', '--save_only_model', 'True', '--remove_unused_columns', 'False', '--warmup_ratio', '0.03', '--num_train_epochs', '2', '--logging_steps', '1', '--report_to', 'tensorboard', '--gradient_checkpointing', 'True', '--overwrite_output_dir', '--bf16', 'True']
[2025-04-27 14:34:33,034] [INFO] [launch.py:256:main] process 2201282 spawned with command: ['/home/jg9904/.conda/envs/rlenv/bin/python', '-u', 'train.py', '--local_rank=2', '--deepspeed', 'scripts/newzero3.json', '--seed', '26', '--model_name_or_path', '/scratch/gpfs/jg9904/saved_models/Qwen2.5-14B-Instruct-RL', '--train_tokenized_file', '/scratch/gpfs/jg9904/cogbehaveRL/RL/offline_rl_v2/data/14K_reward-r2.jsonl', '--output_dir', '/scratch/gpfs/jg9904/saved_models/Qwen2.5-14B-Instruct-RL-2.1', '--per_device_train_batch_size', '1', '--gradient_accumulation_steps', '2', '--evaluation_strategy', 'no', '--save_strategy', 'no', '--learning_rate', '9e-7', '--lr_scheduler_type', 'cosine', '--save_only_model', 'True', '--remove_unused_columns', 'False', '--warmup_ratio', '0.03', '--num_train_epochs', '2', '--logging_steps', '1', '--report_to', 'tensorboard', '--gradient_checkpointing', 'True', '--overwrite_output_dir', '--bf16', 'True']
[2025-04-27 14:34:33,035] [INFO] [launch.py:256:main] process 2201283 spawned with command: ['/home/jg9904/.conda/envs/rlenv/bin/python', '-u', 'train.py', '--local_rank=3', '--deepspeed', 'scripts/newzero3.json', '--seed', '26', '--model_name_or_path', '/scratch/gpfs/jg9904/saved_models/Qwen2.5-14B-Instruct-RL', '--train_tokenized_file', '/scratch/gpfs/jg9904/cogbehaveRL/RL/offline_rl_v2/data/14K_reward-r2.jsonl', '--output_dir', '/scratch/gpfs/jg9904/saved_models/Qwen2.5-14B-Instruct-RL-2.1', '--per_device_train_batch_size', '1', '--gradient_accumulation_steps', '2', '--evaluation_strategy', 'no', '--save_strategy', 'no', '--learning_rate', '9e-7', '--lr_scheduler_type', 'cosine', '--save_only_model', 'True', '--remove_unused_columns', 'False', '--warmup_ratio', '0.03', '--num_train_epochs', '2', '--logging_steps', '1', '--report_to', 'tensorboard', '--gradient_checkpointing', 'True', '--overwrite_output_dir', '--bf16', 'True']
[2025-04-27 14:34:33,035] [INFO] [launch.py:256:main] process 2201284 spawned with command: ['/home/jg9904/.conda/envs/rlenv/bin/python', '-u', 'train.py', '--local_rank=4', '--deepspeed', 'scripts/newzero3.json', '--seed', '26', '--model_name_or_path', '/scratch/gpfs/jg9904/saved_models/Qwen2.5-14B-Instruct-RL', '--train_tokenized_file', '/scratch/gpfs/jg9904/cogbehaveRL/RL/offline_rl_v2/data/14K_reward-r2.jsonl', '--output_dir', '/scratch/gpfs/jg9904/saved_models/Qwen2.5-14B-Instruct-RL-2.1', '--per_device_train_batch_size', '1', '--gradient_accumulation_steps', '2', '--evaluation_strategy', 'no', '--save_strategy', 'no', '--learning_rate', '9e-7', '--lr_scheduler_type', 'cosine', '--save_only_model', 'True', '--remove_unused_columns', 'False', '--warmup_ratio', '0.03', '--num_train_epochs', '2', '--logging_steps', '1', '--report_to', 'tensorboard', '--gradient_checkpointing', 'True', '--overwrite_output_dir', '--bf16', 'True']
[2025-04-27 14:34:33,035] [INFO] [launch.py:256:main] process 2201285 spawned with command: ['/home/jg9904/.conda/envs/rlenv/bin/python', '-u', 'train.py', '--local_rank=5', '--deepspeed', 'scripts/newzero3.json', '--seed', '26', '--model_name_or_path', '/scratch/gpfs/jg9904/saved_models/Qwen2.5-14B-Instruct-RL', '--train_tokenized_file', '/scratch/gpfs/jg9904/cogbehaveRL/RL/offline_rl_v2/data/14K_reward-r2.jsonl', '--output_dir', '/scratch/gpfs/jg9904/saved_models/Qwen2.5-14B-Instruct-RL-2.1', '--per_device_train_batch_size', '1', '--gradient_accumulation_steps', '2', '--evaluation_strategy', 'no', '--save_strategy', 'no', '--learning_rate', '9e-7', '--lr_scheduler_type', 'cosine', '--save_only_model', 'True', '--remove_unused_columns', 'False', '--warmup_ratio', '0.03', '--num_train_epochs', '2', '--logging_steps', '1', '--report_to', 'tensorboard', '--gradient_checkpointing', 'True', '--overwrite_output_dir', '--bf16', 'True']
[2025-04-27 14:34:36,467] [INFO] [real_accelerator.py:239:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2025-04-27 14:34:36,548] [INFO] [real_accelerator.py:239:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2025-04-27 14:34:36,562] [INFO] [real_accelerator.py:239:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2025-04-27 14:34:36,587] [INFO] [real_accelerator.py:239:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2025-04-27 14:34:36,668] [INFO] [real_accelerator.py:239:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2025-04-27 14:34:36,669] [INFO] [real_accelerator.py:239:get_accelerator] Setting ds_accelerator to cuda (auto detect)
/home/jg9904/.conda/envs/rlenv/lib/python3.10/site-packages/transformers/training_args.py:1611: FutureWarning: `evaluation_strategy` is deprecated and will be removed in version 4.46 of 🤗 Transformers. Use `eval_strategy` instead
  warnings.warn(
[2025-04-27 14:34:38,707] [INFO] [comm.py:658:init_distributed] cdb=None
[2025-04-27 14:34:38,936] [INFO] [comm.py:658:init_distributed] cdb=None
[2025-04-27 14:34:38,936] [INFO] [comm.py:689:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl
[2025-04-27 14:34:38,948] [INFO] [comm.py:658:init_distributed] cdb=None
[2025-04-27 14:34:38,978] [INFO] [comm.py:658:init_distributed] cdb=None
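The FutureWarning above is triggered by the `--evaluation_strategy no` flag in the launch command; the argument was renamed to `eval_strategy`. A hedged sketch of the equivalent in-code configuration, with values mirroring the launch command (only the renamed field is the point here):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="/scratch/gpfs/jg9904/saved_models/Qwen2.5-14B-Instruct-RL-2.1",
    eval_strategy="no",            # renamed from evaluation_strategy, which is removed in transformers 4.46
    save_strategy="no",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=2,
    learning_rate=9e-7,
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    num_train_epochs=2,
    bf16=True,
    gradient_checkpointing=True,
)
```

On the command line the equivalent change is simply passing `--eval_strategy no` instead of `--evaluation_strategy no`.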
[2025-04-27 14:34:39,046] [INFO] [comm.py:658:init_distributed] cdb=None
[2025-04-27 14:34:39,081] [INFO] [comm.py:658:init_distributed] cdb=None
WARNING:__main__:Process rank: 4, device: cuda:4, n_gpu: 1
WARNING:__main__:Process rank: 2, device: cuda:2, n_gpu: 1
WARNING:__main__:Process rank: 3, device: cuda:3, n_gpu: 1
WARNING:__main__:Process rank: 1, device: cuda:1, n_gpu: 1
WARNING:__main__:Process rank: 0, device: cuda:0, n_gpu: 1
INFO:__main__:Training parameters CustomTrainingArguments(
_n_gpu=1,
accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None, 'use_configured_state': False},
adafactor=False,
adam_beta1=0.9,
adam_beta2=0.999,
adam_epsilon=1e-08,
auto_find_batch_size=False,
average_tokens_across_devices=False,
batch_eval_metrics=False,
bf16=True,
bf16_full_eval=False,
data_seed=None,
dataloader_drop_last=False,
dataloader_num_workers=0,
dataloader_persistent_workers=False,
dataloader_pin_memory=True,
dataloader_prefetch_factor=None,
ddp_backend=None,
ddp_broadcast_buffers=None,
ddp_bucket_cap_mb=None,
ddp_find_unused_parameters=None,
ddp_timeout=1800,
debug=[],
deepspeed=scripts/newzero3.json,
disable_tqdm=False,
dispatch_batches=None,
do_eval=False,
do_predict=False,
do_train=False,
eval_accumulation_steps=None,
eval_delay=0,
eval_do_concat_batches=True,
eval_on_start=False,
eval_steps=None,
eval_strategy=no,
eval_use_gather_object=False,
evaluation_strategy=no,
fp16=False,
fp16_backend=auto,
fp16_full_eval=False,
fp16_opt_level=O1,
fsdp=[],
fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False},
fsdp_min_num_params=0,
fsdp_transformer_layer_cls_to_wrap=None,
full_determinism=False,
gradient_accumulation_steps=2,
gradient_checkpointing=True,
gradient_checkpointing_kwargs=None,
greater_is_better=None,
group_by_length=False,
half_precision_backend=auto,
hub_always_push=False,
hub_model_id=None,
hub_private_repo=None,
hub_strategy=every_save,
hub_token=<HUB_TOKEN>,
ignore_data_skip=False,
include_for_metrics=[],
include_inputs_for_metrics=False,
include_num_input_tokens_seen=False,
include_tokens_per_second=False,
jit_mode_eval=False,
kl_coeff=0.0,
label_names=None,
label_smoothing_factor=0.0,
learning_rate=9e-07,
length_column_name=length,
load_best_model_at_end=False,
local_rank=0,
log_level=passive,
log_level_replica=warning,
log_on_each_node=True,
logging_dir=/scratch/gpfs/jg9904/saved_models/Qwen2.5-14B-Instruct-RL-2.1/runs/Apr27_14-34-38_della-j16g2,
logging_first_step=False,
logging_nan_inf_filter=True,
logging_steps=1.0,
logging_strategy=steps,
lr_scheduler_kwargs={},
lr_scheduler_type=cosine,
max_grad_norm=1.0,
max_steps=-1,
metric_for_best_model=None,
mp_parameters=,
neftune_noise_alpha=None,
no_cuda=False,
num_train_epochs=2.0,
optim=adamw_torch,
optim_args=None,
optim_target_modules=None,
output_dir=/scratch/gpfs/jg9904/saved_models/Qwen2.5-14B-Instruct-RL-2.1,
overwrite_output_dir=True,
past_index=-1,
per_device_eval_batch_size=8,
per_device_train_batch_size=1,
prediction_loss_only=False,
push_to_hub=False,
push_to_hub_model_id=None,
push_to_hub_organization=None,
push_to_hub_token=<PUSH_TO_HUB_TOKEN>,
ray_scope=last,
remove_unused_columns=False,
report_to=['tensorboard'],
restore_callback_states_from_checkpoint=False,
resume_from_checkpoint=None,
run_name=/scratch/gpfs/jg9904/saved_models/Qwen2.5-14B-Instruct-RL-2.1,
save_on_each_node=False,
save_only_model=True,
save_safetensors=True,
save_steps=500,
save_strategy=no,
save_total_limit=None,
seed=26,
skip_memory_metrics=True,
split_batches=None,
tf32=None,
torch_compile=False,
torch_compile_backend=None,
torch_compile_mode=None,
torch_empty_cache_steps=None,
torchdynamo=None,
tp_size=0,
tpu_metrics_debug=False,
tpu_num_cores=None,
use_cpu=False,
use_ipex=False,
use_legacy_prediction_loop=False,
use_liger_kernel=False,
use_mps_device=False,
warmup_ratio=0.03,
warmup_steps=0,
weight_decay=0.0,
)
[INFO|tokenization_utils_base.py:2058] 2025-04-27 14:34:39,869 >> loading file vocab.json
[INFO|tokenization_utils_base.py:2058] 2025-04-27 14:34:39,869 >> loading file merges.txt
[INFO|tokenization_utils_base.py:2058] 2025-04-27 14:34:39,869 >> loading file tokenizer.json
[INFO|tokenization_utils_base.py:2058] 2025-04-27 14:34:39,869 >> loading file added_tokens.json
[INFO|tokenization_utils_base.py:2058] 2025-04-27 14:34:39,869 >> loading file special_tokens_map.json
[INFO|tokenization_utils_base.py:2058] 2025-04-27 14:34:39,869 >> loading file tokenizer_config.json
[INFO|tokenization_utils_base.py:2058] 2025-04-27 14:34:39,869 >> loading file chat_template.jinja
[2025-04-27 14:34:39,910] [INFO] [config.py:734:__init__] Config mesh_device None world_size = 6
[WARNING|logging.py:329] 2025-04-27 14:34:39,913 >> You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
[2025-04-27 14:34:39,917] [INFO] [config.py:734:__init__] Config mesh_device None world_size = 6
[WARNING|logging.py:329] 2025-04-27 14:34:39,919 >> You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
[2025-04-27 14:34:39,927] [INFO] [config.py:734:__init__] Config mesh_device None world_size = 6
[2025-04-27 14:34:39,928] [INFO] [config.py:734:__init__] Config mesh_device None world_size = 6
[WARNING|logging.py:329] 2025-04-27 14:34:39,929 >> You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
[WARNING|logging.py:329] 2025-04-27 14:34:39,930 >> You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
WARNING:__main__:Process rank: 5, device: cuda:5, n_gpu: 1
[INFO|tokenization_utils_base.py:2323] 2025-04-27 14:34:40,171 >> Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
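The repeated Flash Attention 2.0 warning is expected in this setup: with ZeRO-3 zero.init() and CPU offload the weights are intentionally not resident on the GPU at construction time, and DeepSpeed moves them later, so no explicit `model.to('cuda')` is needed. The exact load call in train.py is not shown in the log; a hedged sketch of the kind of call that produces this combination of messages:

```python
import torch
from transformers import AutoModelForCausalLM

# Assumption: train.py loads the model roughly like this (dtype and attention backend
# inferred from the config dump and the Flash Attention warning in the log).
model = AutoModelForCausalLM.from_pretrained(
    "/scratch/gpfs/jg9904/saved_models/Qwen2.5-14B-Instruct-RL",
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",  # triggers the "not initialized on GPU" warning under ZeRO-3
    use_cache=False,
)
```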
[INFO|configuration_utils.py:697] 2025-04-27 14:34:40,171 >> loading configuration file /scratch/gpfs/jg9904/saved_models/Qwen2.5-14B-Instruct-RL/config.json
[INFO|configuration_utils.py:771] 2025-04-27 14:34:40,173 >> Model config Qwen2Config {
  "architectures": [
    "Qwen2ForCausalLM"
  ],
  "attention_dropout": 0.0,
  "bos_token_id": 151643,
  "eos_token_id": 151645,
  "hidden_act": "silu",
  "hidden_size": 5120,
  "initializer_range": 0.02,
  "intermediate_size": 13824,
  "max_position_embeddings": 32768,
  "max_window_layers": 70,
  "model_type": "qwen2",
  "num_attention_heads": 40,
  "num_hidden_layers": 48,
  "num_key_value_heads": 8,
  "rms_norm_eps": 1e-06,
  "rope_scaling": null,
  "rope_theta": 1000000.0,
  "sliding_window": null,
  "tie_word_embeddings": false,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.50.3",
  "use_cache": false,
  "use_sliding_window": false,
  "vocab_size": 152064
}
[INFO|modeling_utils.py:1151] 2025-04-27 14:34:40,207 >> loading weights file /scratch/gpfs/jg9904/saved_models/Qwen2.5-14B-Instruct-RL/model.safetensors.index.json
[INFO|modeling_utils.py:1225] 2025-04-27 14:34:40,208 >> Will use torch_dtype=torch.bfloat16 as defined in model's config object
[INFO|modeling_utils.py:2170] 2025-04-27 14:34:40,208 >> Instantiating Qwen2ForCausalLM model under default dtype torch.bfloat16.
[INFO|modeling_utils.py:3747] 2025-04-27 14:34:40,208 >> Detected DeepSpeed ZeRO-3: activating zero.init() for this model
[2025-04-27 14:34:40,208] [INFO] [config.py:734:__init__] Config mesh_device None world_size = 6
[WARNING|logging.py:329] 2025-04-27 14:34:40,211 >> You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
[INFO|configuration_utils.py:1139] 2025-04-27 14:34:40,216 >> Generate config GenerationConfig {
  "bos_token_id": 151643,
  "eos_token_id": 151645,
  "use_cache": false
}
[2025-04-27 14:34:40,291] [INFO] [config.py:734:__init__] Config mesh_device None world_size = 6
[WARNING|logging.py:329] 2025-04-27 14:34:40,294 >> You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
[2025-04-27 14:34:59,946] [INFO] [partition_parameters.py:348:__exit__] finished initializing model - num_params = 579, num_elems = 14.77B
Loading checkpoint shards: 0%| | 0/6 [00:00
>> All model checkpoint weights were used when initializing Qwen2ForCausalLM.
[INFO|modeling_utils.py:4995] 2025-04-27 14:35:12,270 >> All the weights of Qwen2ForCausalLM were initialized from the model checkpoint at /scratch/gpfs/jg9904/saved_models/Qwen2.5-14B-Instruct-RL. If your task is similar to the task the model of the checkpoint was trained on, you can already use Qwen2ForCausalLM for predictions without further training.
[INFO|configuration_utils.py:1092] 2025-04-27 14:35:12,274 >> loading configuration file /scratch/gpfs/jg9904/saved_models/Qwen2.5-14B-Instruct-RL/generation_config.json
[INFO|configuration_utils.py:1139] 2025-04-27 14:35:12,275 >> Generate config GenerationConfig {
  "bos_token_id": 151643,
  "do_sample": true,
  "eos_token_id": [
    151645,
    151643
  ],
  "pad_token_id": 151643,
  "repetition_penalty": 1.05,
  "temperature": 0.7,
  "top_k": 20,
  "top_p": 0.8
}
/scratch/gpfs/jg9904/cogbehaveRL/RL/offline_rl_v2/train.py:274: FutureWarning: `tokenizer` is deprecated and will be removed in version 5.0.0 for `OfflineREINFORCETrainer.__init__`. Use `processing_class` instead.
  trainer = OfflineREINFORCETrainer(
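The FutureWarning directly above points at the trainer construction on line 274 of train.py: recent Trainer releases take the tokenizer through `processing_class`. A hedged sketch of the rename; OfflineREINFORCETrainer's other arguments are the project's own and are only partly visible in the log, so they are illustrative here:

```python
# Deprecated form (what the warning is about):
# trainer = OfflineREINFORCETrainer(model=model, args=training_args, tokenizer=tokenizer, ...)

# Form recommended by the warning:
trainer = OfflineREINFORCETrainer(
    model=model,
    args=training_args,
    processing_class=tokenizer,   # replaces the deprecated `tokenizer=` keyword
    train_dataset=train_dataset,  # assumed; remaining kwargs unchanged
)
```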
Using custom data configuration default-74d43af3a2a8aa75
INFO:datasets.builder:Using custom data configuration default-74d43af3a2a8aa75
Loading Dataset Infos from /home/jg9904/.conda/envs/rlenv/lib/python3.10/site-packages/datasets/packaged_modules/json
INFO:datasets.info:Loading Dataset Infos from /home/jg9904/.conda/envs/rlenv/lib/python3.10/site-packages/datasets/packaged_modules/json
Overwrite dataset info from restored data version if exists.
INFO:datasets.builder:Overwrite dataset info from restored data version if exists.
Loading Dataset info from /home/jg9904/.cache/huggingface/datasets/json/default-74d43af3a2a8aa75/0.0.0/f4e89e8750d5d5ffbef2c078bf0ddfedef29dc2faff52a6255cf513c05eb1092
INFO:datasets.info:Loading Dataset info from /home/jg9904/.cache/huggingface/datasets/json/default-74d43af3a2a8aa75/0.0.0/f4e89e8750d5d5ffbef2c078bf0ddfedef29dc2faff52a6255cf513c05eb1092
Found cached dataset json (/home/jg9904/.cache/huggingface/datasets/json/default-74d43af3a2a8aa75/0.0.0/f4e89e8750d5d5ffbef2c078bf0ddfedef29dc2faff52a6255cf513c05eb1092)
INFO:datasets.builder:Found cached dataset json (/home/jg9904/.cache/huggingface/datasets/json/default-74d43af3a2a8aa75/0.0.0/f4e89e8750d5d5ffbef2c078bf0ddfedef29dc2faff52a6255cf513c05eb1092)
Loading Dataset info from /home/jg9904/.cache/huggingface/datasets/json/default-74d43af3a2a8aa75/0.0.0/f4e89e8750d5d5ffbef2c078bf0ddfedef29dc2faff52a6255cf513c05eb1092
INFO:datasets.info:Loading Dataset info from /home/jg9904/.cache/huggingface/datasets/json/default-74d43af3a2a8aa75/0.0.0/f4e89e8750d5d5ffbef2c078bf0ddfedef29dc2faff52a6255cf513c05eb1092
[INFO|trainer.py:748] 2025-04-27 14:35:12,471 >> Using auto half precision backend
INFO:__main__:*** Train ***
[INFO|deepspeed.py:386] 2025-04-27 14:35:12,704 >> Detected ZeRO Offload and non-DeepSpeed optimizers: This combination should work as long as the custom optimizer has both CPU and GPU implementation (except LAMB)
Installed CUDA version 12.8 does not match the version torch was compiled with 12.1 but since the APIs are compatible, accepting this combination
Using /home/jg9904/.cache/torch_extensions/py310_cu121 as PyTorch extensions root...
Emitting ninja build file /home/jg9904/.cache/torch_extensions/py310_cu121/cpu_adam/build.ninja...
Building extension module cpu_adam...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
ninja: no work to do.
Loading extension module cpu_adam...
Time to load cpu_adam op: 0.646265983581543 seconds
Installed CUDA version 12.8 does not match the version torch was compiled with 12.1 but since the APIs are compatible, accepting this combination
Using /home/jg9904/.cache/torch_extensions/py310_cu121 as PyTorch extensions root...
Installed CUDA version 12.8 does not match the version torch was compiled with 12.1 but since the APIs are compatible, accepting this combination
Using /home/jg9904/.cache/torch_extensions/py310_cu121 as PyTorch extensions root...
Installed CUDA version 12.8 does not match the version torch was compiled with 12.1 but since the APIs are compatible, accepting this combination
Using /home/jg9904/.cache/torch_extensions/py310_cu121 as PyTorch extensions root...
Emitting ninja build file /home/jg9904/.cache/torch_extensions/py310_cu121/cpu_adam/build.ninja...
Building extension module cpu_adam...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
ninja: no work to do.
Loading extension module cpu_adam...
Time to load cpu_adam op: 0.8059849739074707 seconds
Installed CUDA version 12.8 does not match the version torch was compiled with 12.1 but since the APIs are compatible, accepting this combination
Using /home/jg9904/.cache/torch_extensions/py310_cu121 as PyTorch extensions root...
Emitting ninja build file /home/jg9904/.cache/torch_extensions/py310_cu121/cpu_adam/build.ninja...
Building extension module cpu_adam...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
ninja: no work to do.
Loading extension module cpu_adam...
Time to load cpu_adam op: 0.504305362701416 seconds
Installed CUDA version 12.8 does not match the version torch was compiled with 12.1 but since the APIs are compatible, accepting this combination
Using /home/jg9904/.cache/torch_extensions/py310_cu121 as PyTorch extensions root...
Emitting ninja build file /home/jg9904/.cache/torch_extensions/py310_cu121/cpu_adam/build.ninja...
Building extension module cpu_adam...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
ninja: no work to do.
Loading extension module cpu_adam...
Time to load cpu_adam op: 0.5033798217773438 seconds
Loading extension module cpu_adam...
Loading extension module cpu_adam...
Time to load cpu_adam op: 0.8922135829925537 seconds
Time to load cpu_adam op: 0.825369119644165 seconds
Adam Optimizer #0 is created with AVX512 arithmetic capability.
Config: alpha=0.000001, betas=(0.900000, 0.999000), weight_decay=0.010000, adam_w=1
[2025-04-27 14:35:13,719] [INFO] [logging.py:107:log_dist] [Rank 0] DeepSpeed info: version=0.16.5, git-hash=unknown, git-branch=unknown
[2025-04-27 14:35:13,719] [INFO] [config.py:734:__init__] Config mesh_device None world_size = 6
[2025-04-27 14:35:13,730] [INFO] [logging.py:107:log_dist] [Rank 0] DeepSpeed Flops Profiler Enabled: False
[2025-04-27 14:35:13,731] [INFO] [logging.py:107:log_dist] [Rank 0] Using client Optimizer as basic optimizer
[2025-04-27 14:35:13,731] [INFO] [logging.py:107:log_dist] [Rank 0] Removing param_group that has no 'params' in the basic Optimizer
[2025-04-27 14:35:13,753] [INFO] [logging.py:107:log_dist] [Rank 0] DeepSpeed Basic Optimizer = DeepSpeedCPUAdam
[2025-04-27 14:35:13,753] [INFO] [utils.py:59:is_zero_supported_optimizer] Checking ZeRO support for optimizer=DeepSpeedCPUAdam type=
[2025-04-27 14:35:13,753] [INFO] [logging.py:107:log_dist] [Rank 0] Creating fp16 ZeRO stage 3 optimizer, MiCS is enabled False, Hierarchical params gather False
[2025-04-27 14:35:13,753] [INFO] [logging.py:107:log_dist] [Rank 0] Creating torch.bfloat16 ZeRO stage 3 optimizer
[2025-04-27 14:35:13,877] [INFO] [utils.py:781:see_memory_usage] Stage 3 initialize beginning
[2025-04-27 14:35:13,878] [INFO] [utils.py:782:see_memory_usage] MA 0.0 GB Max_MA 2.9 GB CA 0.0 GB Max_CA 3 GB
[2025-04-27 14:35:13,878] [INFO] [utils.py:789:see_memory_usage] CPU Virtual Memory: used = 82.67 GB, percent = 8.2%
[2025-04-27 14:35:13,880] [INFO] [stage3.py:170:__init__] Reduce bucket size 100000000
[2025-04-27 14:35:13,880] [INFO] [stage3.py:171:__init__] Prefetch bucket size 100000000
[2025-04-27 14:35:13,988] [INFO] [utils.py:781:see_memory_usage] DeepSpeedZeRoOffload initialize [begin]
[2025-04-27 14:35:13,988] [INFO] [utils.py:782:see_memory_usage] MA 0.0 GB Max_MA 0.0 GB CA 0.0 GB Max_CA 0 GB
[2025-04-27 14:35:13,989] [INFO] [utils.py:789:see_memory_usage] CPU Virtual Memory: used = 82.67 GB, percent = 8.2%
Parameter Offload: Total persistent parameters: 840704 in 241 params
[2025-04-27 14:35:14,134] [INFO] [utils.py:781:see_memory_usage] DeepSpeedZeRoOffload initialize [end]
[2025-04-27 14:35:14,134] [INFO] [utils.py:782:see_memory_usage] MA 0.0 GB Max_MA 0.0 GB CA 0.0 GB Max_CA 0 GB
[2025-04-27 14:35:14,135] [INFO] [utils.py:789:see_memory_usage] CPU Virtual Memory: used = 82.68 GB, percent = 8.2%
[2025-04-27 14:35:14,237] [INFO] [utils.py:781:see_memory_usage] Before creating fp16 partitions
[2025-04-27 14:35:14,237] [INFO] [utils.py:782:see_memory_usage] MA 0.0 GB Max_MA 0.0 GB CA 0.0 GB Max_CA 0 GB
[2025-04-27 14:35:14,237] [INFO] [utils.py:789:see_memory_usage] CPU Virtual Memory: used = 82.68 GB, percent = 8.2%
[2025-04-27 14:35:34,767] [INFO] [utils.py:781:see_memory_usage] After creating fp16 partitions: 24
[2025-04-27 14:35:34,768] [INFO] [utils.py:782:see_memory_usage] MA 0.0 GB Max_MA 0.0 GB CA 0.0 GB Max_CA 0 GB
[2025-04-27 14:35:34,768] [INFO] [utils.py:789:see_memory_usage] CPU Virtual Memory: used = 142.43 GB, percent = 14.1%
[2025-04-27 14:35:35,072] [INFO] [utils.py:781:see_memory_usage] Before creating fp32 partitions
[2025-04-27 14:35:35,073] [INFO] [utils.py:782:see_memory_usage] MA 0.0 GB Max_MA 0.0 GB CA 0.0 GB Max_CA 0 GB
[2025-04-27 14:35:35,073] [INFO] [utils.py:789:see_memory_usage] CPU Virtual Memory: used = 147.59 GB, percent = 14.7%
[2025-04-27 14:35:37,493] [INFO] [utils.py:781:see_memory_usage] After creating fp32 partitions
[2025-04-27 14:35:37,494] [INFO] [utils.py:782:see_memory_usage] MA 0.0 GB Max_MA 0.0 GB CA 0.0 GB Max_CA 0 GB
[2025-04-27 14:35:37,494] [INFO] [utils.py:789:see_memory_usage] CPU Virtual Memory: used = 187.16 GB, percent = 18.6%
[2025-04-27 14:35:37,671] [INFO] [utils.py:781:see_memory_usage] Before initializing optimizer states
[2025-04-27 14:35:37,672] [INFO] [utils.py:782:see_memory_usage] MA 0.0 GB Max_MA 0.0 GB CA 0.0 GB Max_CA 0 GB
[2025-04-27 14:35:37,672] [INFO] [utils.py:789:see_memory_usage] CPU Virtual Memory: used = 193.51 GB, percent = 19.2%
[2025-04-27 14:35:43,524] [INFO] [utils.py:781:see_memory_usage] After initializing optimizer states
[2025-04-27 14:35:43,525] [INFO] [utils.py:782:see_memory_usage] MA 0.0 GB Max_MA 0.0 GB CA 0.0 GB Max_CA 0 GB
[2025-04-27 14:35:43,525] [INFO] [utils.py:789:see_memory_usage] CPU Virtual Memory: used = 268.0 GB, percent = 26.6%
[2025-04-27 14:35:43,607] [INFO] [stage3.py:534:_setup_for_real_optimizer] optimizer state initialized
/home/jg9904/.conda/envs/rlenv/lib/python3.10/site-packages/transformers/data/data_collator.py:741: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at ../torch/csrc/utils/tensor_new.cpp:278.)
  batch["labels"] = torch.tensor(batch["labels"], dtype=torch.int64)
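The UserWarning from data_collator.py carries its own fix: build a single numpy array from the per-example label arrays before constructing the tensor. A minimal sketch of what a custom collator could do instead (variable names are illustrative, and the arrays are assumed to be padded to equal length, as they must already be for the existing `torch.tensor` call to work):

```python
import numpy as np
import torch

# labels_list: list of equal-length numpy arrays, one per example in the batch
labels = np.array(labels_list)                          # one ndarray first, as the warning recommends
batch["labels"] = torch.as_tensor(labels, dtype=torch.int64)
```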
batch["labels"] = torch.tensor(batch["labels"], dtype=torch.int64) [2025-04-27 14:35:47,490] [INFO] [utils.py:781:see_memory_usage] After initializing ZeRO optimizer [2025-04-27 14:35:47,491] [INFO] [utils.py:782:see_memory_usage] MA 0.19 GB Max_MA 3.09 GB CA 3.09 GB Max_CA 3 GB [2025-04-27 14:35:47,491] [INFO] [utils.py:789:see_memory_usage] CPU Virtual Memory: used = 315.26 GB, percent = 31.3% [2025-04-27 14:35:47,491] [INFO] [logging.py:107:log_dist] [Rank 0] DeepSpeed Final Optimizer = DeepSpeedZeroOptimizer_Stage3 [2025-04-27 14:35:47,491] [INFO] [logging.py:107:log_dist] [Rank 0] DeepSpeed using configured LR scheduler = None [2025-04-27 14:35:47,491] [INFO] [logging.py:107:log_dist] [Rank 0] DeepSpeed LR Scheduler = None [2025-04-27 14:35:47,491] [INFO] [logging.py:107:log_dist] [Rank 0] step=0, skipped=0, lr=[0.0, 0.0], mom=[(0.9, 0.999), (0.9, 0.999)] [2025-04-27 14:35:47,492] [INFO] [config.py:1000:print] DeepSpeedEngine configuration: [2025-04-27 14:35:47,493] [INFO] [config.py:1004:print] activation_checkpointing_config { "partition_activations": false, "contiguous_memory_optimization": false, "cpu_checkpointing": false, "number_checkpoints": null, "synchronize_checkpoint_boundary": false, "profile": false } [2025-04-27 14:35:47,493] [INFO] [config.py:1004:print] aio_config ................... {'block_size': 1048576, 'queue_depth': 8, 'intra_op_parallelism': 1, 'single_submit': False, 'overlap_events': True, 'use_gds': False} [2025-04-27 14:35:47,493] [INFO] [config.py:1004:print] amp_enabled .................. False [2025-04-27 14:35:47,493] [INFO] [config.py:1004:print] amp_params ................... False [2025-04-27 14:35:47,493] [INFO] [config.py:1004:print] autotuning_config ............ { "enabled": false, "start_step": null, "end_step": null, "metric_path": null, "arg_mappings": null, "metric": "throughput", "model_info": null, "results_dir": "autotuning_results", "exps_dir": "autotuning_exps", "overwrite": true, "fast": true, "start_profile_step": 3, "end_profile_step": 5, "tuner_type": "gridsearch", "tuner_early_stopping": 5, "tuner_num_trials": 50, "model_info_path": null, "mp_size": 1, "max_train_batch_size": null, "min_train_batch_size": 1, "max_train_micro_batch_size_per_gpu": 1.024000e+03, "min_train_micro_batch_size_per_gpu": 1, "num_tuning_micro_batch_sizes": 3 } [2025-04-27 14:35:47,493] [INFO] [config.py:1004:print] bfloat16_enabled ............. True [2025-04-27 14:35:47,493] [INFO] [config.py:1004:print] bfloat16_immediate_grad_update True [2025-04-27 14:35:47,493] [INFO] [config.py:1004:print] checkpoint_parallel_write_pipeline False [2025-04-27 14:35:47,493] [INFO] [config.py:1004:print] checkpoint_tag_validation_enabled True [2025-04-27 14:35:47,493] [INFO] [config.py:1004:print] checkpoint_tag_validation_fail False [2025-04-27 14:35:47,493] [INFO] [config.py:1004:print] comms_config ................. [2025-04-27 14:35:47,493] [INFO] [config.py:1004:print] communication_data_type ...... None [2025-04-27 14:35:47,493] [INFO] [config.py:1004:print] compression_config ........... 
[2025-04-27 14:35:47,493] [INFO] [config.py:1004:print] compression_config ........... {'weight_quantization': {'shared_parameters': {'enabled': False, 'quantizer_kernel': False, 'schedule_offset': 0, 'quantize_groups': 1, 'quantize_verbose': False, 'quantization_type': 'symmetric', 'quantize_weight_in_forward': False, 'rounding': 'nearest', 'fp16_mixed_quantize': False, 'quantize_change_ratio': 0.001}, 'different_groups': {}}, 'activation_quantization': {'shared_parameters': {'enabled': False, 'quantization_type': 'symmetric', 'range_calibration': 'dynamic', 'schedule_offset': 1000}, 'different_groups': {}}, 'sparse_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'row_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'head_pruning': {'shared_parameters': {'enabled': False, 'method': 'topk', 'schedule_offset': 1000}, 'different_groups': {}}, 'channel_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'layer_reduction': {'enabled': False}}
[2025-04-27 14:35:47,493] [INFO] [config.py:1004:print] curriculum_enabled_legacy .... False
[2025-04-27 14:35:47,493] [INFO] [config.py:1004:print] curriculum_params_legacy ..... False
[2025-04-27 14:35:47,493] [INFO] [config.py:1004:print] data_efficiency_config ....... {'enabled': False, 'seed': 1234, 'data_sampling': {'enabled': False, 'num_epochs': 1000, 'num_workers': 0, 'pin_memory': False, 'curriculum_learning': {'enabled': False}, 'dynamic_batching': {'enabled': False, 'lr_scaling_method': 'linear', 'min_batch_size': 1, 'max_batch_size': None, 'sequence_picking_order': 'dataloader', 'verbose': False}}, 'data_routing': {'enabled': False, 'random_ltd': {'enabled': False, 'layer_token_lr_schedule': {'enabled': False}}}}
[2025-04-27 14:35:47,493] [INFO] [config.py:1004:print] data_efficiency_enabled ...... False
[2025-04-27 14:35:47,493] [INFO] [config.py:1004:print] dataloader_drop_last ......... False
[2025-04-27 14:35:47,493] [INFO] [config.py:1004:print] disable_allgather ............ False
[2025-04-27 14:35:47,493] [INFO] [config.py:1004:print] dump_state ................... False
[2025-04-27 14:35:47,493] [INFO] [config.py:1004:print] dynamic_loss_scale_args ...... None
[2025-04-27 14:35:47,493] [INFO] [config.py:1004:print] eigenvalue_enabled ........... False
[2025-04-27 14:35:47,493] [INFO] [config.py:1004:print] eigenvalue_gas_boundary_resolution 1
[2025-04-27 14:35:47,493] [INFO] [config.py:1004:print] eigenvalue_layer_name ........ bert.encoder.layer
[2025-04-27 14:35:47,493] [INFO] [config.py:1004:print] eigenvalue_layer_num ......... 0
[2025-04-27 14:35:47,493] [INFO] [config.py:1004:print] eigenvalue_max_iter .......... 100
[2025-04-27 14:35:47,493] [INFO] [config.py:1004:print] eigenvalue_stability ......... 1e-06
[2025-04-27 14:35:47,493] [INFO] [config.py:1004:print] eigenvalue_tol ............... 0.01
[2025-04-27 14:35:47,493] [INFO] [config.py:1004:print] eigenvalue_verbose ........... False
[2025-04-27 14:35:47,493] [INFO] [config.py:1004:print] elasticity_enabled ........... False
[2025-04-27 14:35:47,494] [INFO] [config.py:1004:print] flops_profiler_config ........ { "enabled": false, "recompute_fwd_factor": 0.0, "profile_step": 1, "module_depth": -1, "top_modules": 1, "detailed": true, "output_file": null }
[2025-04-27 14:35:47,494] [INFO] [config.py:1004:print] fp16_auto_cast ............... None
[2025-04-27 14:35:47,494] [INFO] [config.py:1004:print] fp16_enabled ................. False
[2025-04-27 14:35:47,494] [INFO] [config.py:1004:print] fp16_master_weights_and_gradients False
[2025-04-27 14:35:47,494] [INFO] [config.py:1004:print] global_rank .................. 0
[2025-04-27 14:35:47,494] [INFO] [config.py:1004:print] grad_accum_dtype ............. None
[2025-04-27 14:35:47,494] [INFO] [config.py:1004:print] gradient_accumulation_steps .. 2
[2025-04-27 14:35:47,494] [INFO] [config.py:1004:print] gradient_clipping ............ 1.0
[2025-04-27 14:35:47,494] [INFO] [config.py:1004:print] gradient_predivide_factor .... 1.0
[2025-04-27 14:35:47,494] [INFO] [config.py:1004:print] graph_harvesting ............. False
[2025-04-27 14:35:47,494] [INFO] [config.py:1004:print] hybrid_engine ................ enabled=False max_out_tokens=512 inference_tp_size=1 release_inference_cache=False pin_parameters=True tp_gather_partition_size=8
[2025-04-27 14:35:47,494] [INFO] [config.py:1004:print] initial_dynamic_scale ........ 1
[2025-04-27 14:35:47,494] [INFO] [config.py:1004:print] load_universal_checkpoint .... False
[2025-04-27 14:35:47,494] [INFO] [config.py:1004:print] loss_scale ................... 1.0
[2025-04-27 14:35:47,494] [INFO] [config.py:1004:print] memory_breakdown ............. False
[2025-04-27 14:35:47,494] [INFO] [config.py:1004:print] mics_hierarchial_params_gather False
[2025-04-27 14:35:47,494] [INFO] [config.py:1004:print] mics_shard_size .............. -1
[2025-04-27 14:35:47,494] [INFO] [config.py:1004:print] monitor_config ............... tensorboard=TensorBoardConfig(enabled=False, output_path='', job_name='DeepSpeedJobName') comet=CometConfig(enabled=False, samples_log_interval=100, project=None, workspace=None, api_key=None, experiment_name=None, experiment_key=None, online=None, mode=None) wandb=WandbConfig(enabled=False, group=None, team=None, project='deepspeed') csv_monitor=CSVConfig(enabled=False, output_path='', job_name='DeepSpeedJobName')
[2025-04-27 14:35:47,494] [INFO] [config.py:1004:print] nebula_config ................ { "enabled": false, "persistent_storage_path": null, "persistent_time_interval": 100, "num_of_version_in_retention": 2, "enable_nebula_load": true, "load_path": null }
[2025-04-27 14:35:47,494] [INFO] [config.py:1004:print] optimizer_legacy_fusion ...... False
[2025-04-27 14:35:47,494] [INFO] [config.py:1004:print] optimizer_name ............... None
[2025-04-27 14:35:47,494] [INFO] [config.py:1004:print] optimizer_params ............. None
[2025-04-27 14:35:47,494] [INFO] [config.py:1004:print] pipeline ..................... {'stages': 'auto', 'partition': 'best', 'seed_layers': False, 'activation_checkpoint_interval': 0, 'pipe_partitioned': True, 'grad_partitioned': True}
[2025-04-27 14:35:47,494] [INFO] [config.py:1004:print] pld_enabled .................. False
[2025-04-27 14:35:47,494] [INFO] [config.py:1004:print] pld_params ................... False
[2025-04-27 14:35:47,494] [INFO] [config.py:1004:print] prescale_gradients ........... False
[2025-04-27 14:35:47,494] [INFO] [config.py:1004:print] scheduler_name ............... None
[2025-04-27 14:35:47,494] [INFO] [config.py:1004:print] scheduler_params ............. None
[2025-04-27 14:35:47,494] [INFO] [config.py:1004:print] seq_parallel_communication_data_type torch.float32
[2025-04-27 14:35:47,494] [INFO] [config.py:1004:print] sparse_attention ............. None
[2025-04-27 14:35:47,494] [INFO] [config.py:1004:print] sparse_gradients_enabled ..... False
[2025-04-27 14:35:47,494] [INFO] [config.py:1004:print] steps_per_print .............. inf
[2025-04-27 14:35:47,494] [INFO] [config.py:1004:print] tensor_parallel_config ....... dtype=torch.float16 autotp_size=0 tensor_parallel=TPConfig(tp_size=1, tp_grain_size=1, mpu=None, tp_group=None) injection_policy_tuple=None keep_module_on_host=False replace_with_kernel_inject=False
[2025-04-27 14:35:47,494] [INFO] [config.py:1004:print] timers_config ................ enabled=True synchronized=True
[2025-04-27 14:35:47,494] [INFO] [config.py:1004:print] train_batch_size ............. 12
[2025-04-27 14:35:47,494] [INFO] [config.py:1004:print] train_micro_batch_size_per_gpu 1
[2025-04-27 14:35:47,494] [INFO] [config.py:1004:print] use_data_before_expert_parallel_ False
[2025-04-27 14:35:47,494] [INFO] [config.py:1004:print] use_node_local_storage ....... False
[2025-04-27 14:35:47,494] [INFO] [config.py:1004:print] wall_clock_breakdown ......... False
[2025-04-27 14:35:47,494] [INFO] [config.py:1004:print] weight_quantization_config ... None
[2025-04-27 14:35:47,494] [INFO] [config.py:1004:print] world_size ................... 6
[2025-04-27 14:35:47,494] [INFO] [config.py:1004:print] zero_allow_untested_optimizer True
[2025-04-27 14:35:47,494] [INFO] [config.py:1004:print] zero_config .................. stage=3 contiguous_gradients=True reduce_scatter=True reduce_bucket_size=100000000 use_multi_rank_bucket_allreduce=True allgather_partitions=True allgather_bucket_size=500000000 overlap_comm=True load_from_fp32_weights=True elastic_checkpoint=False offload_param=DeepSpeedZeroOffloadParamConfig(device='cpu', nvme_path=None, buffer_count=5, buffer_size=100000000, max_in_cpu=1000000000, pin_memory=True) offload_optimizer=DeepSpeedZeroOffloadOptimizerConfig(device='cpu', nvme_path=None, buffer_count=4, pin_memory=True, pipeline_read=False, pipeline_write=False, fast_init=False, ratio=1.0) sub_group_size=100000000 cpu_offload_param=None cpu_offload_use_pin_memory=None cpu_offload=None prefetch_bucket_size=100000000 param_persistence_threshold=100000 model_persistence_threshold=9223372036854775807 max_live_parameters=100000000 max_reuse_distance=100000000 gather_16bit_weights_on_model_save=True module_granularity_threshold=0 use_all_reduce_for_fetch_params=False stage3_gather_fp16_weights_on_model_save=False ignore_unused_parameters=True legacy_stage1=False round_robin_gradients=False zero_hpz_partition_size=1 zero_quantized_weights=False zero_quantized_nontrainable_weights=False zero_quantized_gradients=False zeropp_loco_param=None mics_shard_size=-1 mics_hierarchical_params_gather=False memory_efficient_linear=True pipeline_loading_checkpoint=False override_module_apply=True log_trace_cache_warnings=False
[2025-04-27 14:35:47,494] [INFO] [config.py:1004:print] zero_enabled ................. True
[2025-04-27 14:35:47,494] [INFO] [config.py:1004:print] zero_force_ds_cpu_optimizer .. True
[2025-04-27 14:35:47,494] [INFO] [config.py:1004:print] zero_optimization_stage ...... 3
[2025-04-27 14:35:47,494] [INFO] [config.py:990:print_user_config] json = {
  "fp16": { "enabled": false },
  "bf16": { "enabled": true },
  "train_micro_batch_size_per_gpu": 1,
  "gradient_accumulation_steps": 2,
  "zero_optimization": {
    "stage": 3,
    "offload_optimizer": { "device": "cpu", "pin_memory": true },
    "offload_param": { "device": "cpu", "pin_memory": true },
    "overlap_comm": true,
    "contiguous_gradients": true,
    "sub_group_size": 1.000000e+08,
    "reduce_bucket_size": 1.000000e+08,
    "stage3_prefetch_bucket_size": 1.000000e+08,
    "stage3_param_persistence_threshold": 1.000000e+05,
    "stage3_max_live_parameters": 1.000000e+08,
    "stage3_max_reuse_distance": 1.000000e+08,
    "stage3_gather_16bit_weights_on_model_save": true
  },
  "gradient_clipping": 1.0,
  "wall_clock_breakdown": false,
  "steps_per_print": inf,
  "zero_allow_untested_optimizer": true
}
[INFO|trainer.py:2409] 2025-04-27 14:35:47,495 >> ***** Running training *****
[INFO|trainer.py:2410] 2025-04-27 14:35:47,495 >> Num examples = 8,460
[INFO|trainer.py:2411] 2025-04-27 14:35:47,495 >> Num Epochs = 2
[INFO|trainer.py:2412] 2025-04-27 14:35:47,495 >> Instantaneous batch size per device = 1
[INFO|trainer.py:2415] 2025-04-27 14:35:47,495 >> Total train batch size (w. parallel, distributed & accumulation) = 12
[INFO|trainer.py:2416] 2025-04-27 14:35:47,495 >> Gradient Accumulation steps = 2
[INFO|trainer.py:2417] 2025-04-27 14:35:47,495 >> Total optimization steps = 1,410
[INFO|trainer.py:2418] 2025-04-27 14:35:47,496 >> Number of trainable parameters = 14,770,033,664
0%| | 0/1410 [00:00
>> Training completed. Do not forget to share your model on huggingface.co/models =)
{'train_runtime': 16494.942, 'train_samples_per_second': 1.026, 'train_steps_per_second': 0.085, 'train_loss': 0.022131871465586973, 'epoch': 2.0}
100%|██████████| 1410/1410 [4:34:54<00:00, 11.48s/it]
100%|██████████| 1410/1410 [4:34:54<00:00, 11.70s/it]
[INFO|trainer.py:3966] 2025-04-27 19:10:51,571 >> Saving model checkpoint to /scratch/gpfs/jg9904/saved_models/Qwen2.5-14B-Instruct-RL-2.1
[INFO|configuration_utils.py:423] 2025-04-27 19:10:51,585 >> Configuration saved in /scratch/gpfs/jg9904/saved_models/Qwen2.5-14B-Instruct-RL-2.1/config.json
[INFO|configuration_utils.py:908] 2025-04-27 19:10:51,586 >> Configuration saved in /scratch/gpfs/jg9904/saved_models/Qwen2.5-14B-Instruct-RL-2.1/generation_config.json
[2025-04-27 19:10:59,328] [INFO] [launch.py:351:main] Process 2201285 exits successfully.
[2025-04-27 19:11:02,343] [INFO] [launch.py:351:main] Process 2201284 exits successfully.
[2025-04-27 19:11:04,353] [INFO] [launch.py:351:main] Process 2201282 exits successfully.
[2025-04-27 19:11:07,368] [INFO] [launch.py:351:main] Process 2201281 exits successfully.
[2025-04-27 19:11:10,383] [INFO] [launch.py:351:main] Process 2201283 exits successfully.
[INFO|modeling_utils.py:3594] 2025-04-27 19:11:10,418 >> The model is bigger than the maximum size per checkpoint (5GB) and is going to be split in 6 checkpoint shards. You can find where each parameters has been saved in the index located at /scratch/gpfs/jg9904/saved_models/Qwen2.5-14B-Instruct-RL-2.1/model.safetensors.index.json.
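The batch-size and step counts the trainer reports are consistent with the launch configuration; a quick check of the arithmetic, using only values taken from the log:

```python
import math

per_device_batch = 1    # --per_device_train_batch_size
grad_accum = 2          # --gradient_accumulation_steps
world_size = 6          # GPUs 0-5 on one node
num_examples = 8460
num_epochs = 2

train_batch_size = per_device_batch * grad_accum * world_size  # 12, matches the DeepSpeed train_batch_size
steps_per_epoch = math.ceil(num_examples / train_batch_size)   # 705
total_steps = steps_per_epoch * num_epochs                     # 1410, matches "Total optimization steps"
```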
[INFO|tokenization_utils_base.py:2510] 2025-04-27 19:11:10,421 >> tokenizer config file saved in /scratch/gpfs/jg9904/saved_models/Qwen2.5-14B-Instruct-RL-2.1/tokenizer_config.json
[INFO|tokenization_utils_base.py:2519] 2025-04-27 19:11:10,421 >> Special tokens file saved in /scratch/gpfs/jg9904/saved_models/Qwen2.5-14B-Instruct-RL-2.1/special_tokens_map.json
***** train metrics *****
  epoch                    =        2.0
  total_flos               =   421479GF
  train_loss               =     0.0221
  train_runtime            = 4:34:54.94
  train_samples            =       8460
  train_samples_per_second =      1.026
  train_steps_per_second   =      0.085
[2025-04-27 19:11:17,418] [INFO] [launch.py:351:main] Process 2201280 exits successfully.
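Because stage3_gather_16bit_weights_on_model_save is enabled and the run used save_only_model=True, the output directory holds a consolidated bf16 checkpoint in six safetensors shards plus the tokenizer files, so it can be reloaded directly with from_pretrained. A minimal sketch:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

ckpt = "/scratch/gpfs/jg9904/saved_models/Qwen2.5-14B-Instruct-RL-2.1"
tokenizer = AutoTokenizer.from_pretrained(ckpt)
# The six shards are resolved through model.safetensors.index.json automatically.
model = AutoModelForCausalLM.from_pretrained(ckpt, torch_dtype=torch.bfloat16)
```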