Dataset Viewer

| Column | Type | Values |
|---|---|---|
| timestamp | string (date) | 2025-08-20 17:22:46 to 2025-08-20 17:34:43 |
| end_timestamp | string (date) | 2025-08-20 17:23:24 to 2025-08-20 17:36:00 |
| stage_name | string | 1 class |
| stage_number | int64 | 1 |
| level | string | 1 class |
| message | string | 1 class |
| stdout_content | string | 3 values |
| stderr_content | string | 3 values |
| experiment_name | string | 1 class |
| elapsed_time_seconds | float64 | 38 to 484 |
| stage_complete | bool | 1 class |
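The three rows below can also be loaded programmatically with the `datasets` library; a minimal sketch, assuming the dataset is hosted on the Hugging Face Hub (the repo id here is a placeholder, since the actual id is not shown on this page):

```python
from datasets import load_dataset

# Placeholder repo id for illustration; substitute the dataset's real path on the Hub.
ds = load_dataset("TAUR-dev/stage-logs", split="train")

# Each record is one pipeline stage run with the columns listed above.
for row in ds:
    print(row["stage_name"], row["elapsed_time_seconds"], row["stage_complete"])
```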
Row 1

timestamp: 2025-08-20T17:22:46.622902
end_timestamp: 2025-08-20T17:23:24.591471
stage_name: verl_rl
stage_number: 1
level: INFO
message: Complete log capture for stage: verl_rl

stdout_content:
[INFO] Starting stage: VeRL RL training - rl
[INFO] Data preparation succeeded
[INFO] Setting up ray cluster
[DEBUG] SLURM cluster info: 4 nodes, 1 GPUs/node
[INFO] Node list: c613-[012,021-022,031]
[DEBUG] Head node: c613-012
[ERROR] SLURM Ray cluster setup failed: Command '['srun', '--nodes=1', '--ntasks=1', '-w', 'c613-012', 'hostname', '--ip-address']' timed out after 30 seconds
[ERROR] Stage error: RuntimeError: Failed to setup SLURM Ray cluster: Command '['srun', '--nodes=1', '--ntasks=1', '-w', 'c613-012', 'hostname', '--ip-address']' timed out after 30 seconds

stderr_content:
/work/10416/zaynesprague/anaconda3/envs/verl2/lib/python3.10/site-packages/huggingface_hub/file_download.py:980: UserWarning: `local_dir_use_symlinks` parameter is deprecated and will be ignored. The process to download files to a local folder has been updated and do not rely on symlinks anymore. You only need to pass a destination folder as`local_dir`.
For more details, check out https://huggingface.co/docs/huggingface_hub/main/en/guides/download#download-files-to-local-folder.
warnings.warn(
Fetching 12 files: 0%| | 0/12 [00:00<?, ?it/s]
Fetching 12 files: 100%|██████████████████████████████████████| 12/12 [00:00<00:00, 409.24it/s]
Fetching 12 files: 0%| | 0/12 [00:00<?, ?it/s]
Fetching 12 files: 100%|██████████████████████████████████████| 12/12 [00:00<00:00, 491.70it/s]

experiment_name: SBON_advanced_grpo_rewards-TEST-grpo_adv_rwds
elapsed_time_seconds: 37.968569
stage_complete: true
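Row 1 fails while resolving the head node's IP address over `srun`. A minimal sketch of that step, assuming the pipeline shells out roughly as follows (the helper below is hypothetical, written only to illustrate the 30-second timeout reported above):

```python
import subprocess

def get_head_node_ip(head_node: str, timeout_s: int = 30) -> str:
    """Resolve a SLURM node's IP by running `hostname --ip-address` on it via srun."""
    cmd = ["srun", "--nodes=1", "--ntasks=1", "-w", head_node, "hostname", "--ip-address"]
    try:
        result = subprocess.run(cmd, capture_output=True, text=True, check=True, timeout=timeout_s)
    except subprocess.TimeoutExpired as exc:
        # This is the failure seen in Row 1: srun never returned within the timeout,
        # which the stage surfaces as "Failed to setup SLURM Ray cluster".
        raise RuntimeError(f"Failed to setup SLURM Ray cluster: {exc}") from exc
    return result.stdout.strip().split()[0]
```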
Row 2

timestamp: 2025-08-20T17:24:57.776228
end_timestamp: 2025-08-20T17:33:01.662028
stage_name: verl_rl
stage_number: 1
level: INFO
message: Complete log capture for stage: verl_rl

stdout_content:
[INFO] Starting stage: VeRL RL training - rl
[INFO] Data preparation succeeded
[INFO] Setting up ray cluster
[DEBUG] SLURM cluster info: 2 nodes, 1 GPUs/node
[INFO] Node list: c608-[062,071]
[DEBUG] Head node: c608-062
[DEBUG] Ray head address: 129.114.17.13:6379
[INFO] Starting Ray head on c608-062...
[INFO] Waiting for head node to initialize...
[DEBUG] Starting 1 worker nodes...
[DEBUG] Starting worker 1: c608-071
[INFO] Waiting for Ray cluster to stabilize...
[INFO] Connecting to Ray cluster at 129.114.17.13:6379...
[INFO] Ray cluster connected successfully (stats from the connection):
[INFO] Total GPUs: 2.0
[INFO] Available GPUs: 2.0
[INFO] Total CPUs: 64.0
[INFO] SLURM Ray cluster setup completed
[INFO] Starting checkpoint monitoring for intermediate uploads...
[INFO] Intermediate checkpoint upload enabled
[DEBUG] Running verl command:
python -m verl.trainer.main_ppo custom_reward_function.reward_kwargs.format_score_weight=0.0 custom_reward_function.reward_kwargs.format_score_v2_weight=0.0 custom_reward_function.reward_kwargs.transition_penalty_weight=0.0 custom_reward_function.reward_kwargs.similarity_penalty_weight=1.0 custom_reward_function.reward_kwargs.sample_correctness_weight=1.0 custom_reward_function.reward_kwargs.sample_count_penalty_weight=1.0 custom_reward_function.reward_kwargs.reward_min=-1.0 custom_reward_function.reward_kwargs.reward_max=10.0 trainer.total_epochs=30 actor_rollout_ref.actor.optim.lr=5e-06 trainer.save_freq=1 trainer.test_freq=10 trainer.val_before_train=False algorithm.adv_estimator=grpo actor_rollout_ref.rollout.n=2 data.train_batch_size=256 actor_rollout_ref.actor.ppo_mini_batch_size=64 actor_rollout_ref.actor.ppo_micro_batch_size_per_gpu=8 actor_rollout_ref.ref.log_prob_micro_batch_size_per_gpu=16 actor_rollout_ref.rollout.log_prob_micro_batch_size_per_gpu=16 actor_rollout_ref.model.enable_gradient_checkpointing=True actor_rollout_ref.model.enable_activation_offload=True actor_rollout_ref.rollout.gpu_memory_utilization=0.8 actor_rollout_ref.model.use_remove_padding=True actor_rollout_ref.actor.strategy=fsdp2 actor_rollout_ref.actor.fsdp_config.forward_prefetch=True actor_rollout_ref.ref.fsdp_config.forward_prefetch=True reward_model.model.fsdp_config.forward_prefetch=True hydra.run.dir=/scratch/10416/zaynesprague/skill_injection_outputs/sf/grpo_training/exp1/hydra/grpo hydra.output_subdir=null hydra.job.chdir=False actor_rollout_ref.rollout.tensor_model_parallel_size=1 data.max_prompt_length=512 data.max_response_length=4096 actor_rollout_ref.model.path=/scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/skills_in_rl/grpo_lexical_sim/all_tasks/SBON_advanced_grpo_rewards/verl/prefetched_models/TAUR_dev__M_skills_in_rl_v2__1e6_all_tasks_sft_sft actor_rollout_ref.rollout.dtype=bfloat16 critic.optim.lr=1e-05 critic.model.path=/scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/skills_in_rl/grpo_lexical_sim/all_tasks/SBON_advanced_grpo_rewards/verl/prefetched_models/TAUR_dev__M_skills_in_rl_v2__1e6_all_tasks_sft_sft critic.ppo_micro_batch_size_per_gpu=1 algorithm.kl_ctrl.kl_coef=0.001 trainer.logger=[console,wandb] trainer.project_name=rl_skills__8_13_25 trainer.experiment_name=SBON_advanced_grpo_rewards-TEST-grpo_adv_rwds_rl trainer.resume_mode=disable data.train_files=/scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/skills_in_rl/grpo_lexical_sim/all_tasks/SBON_advanced_grpo_rewards/verl/data/train.parquet data.val_files=/scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/skills_in_rl/grpo_lexical_sim/all_tasks/SBON_advanced_grpo_rewards/verl/data/test.parquet custom_reward_function.path=/scratch/10416/zaynesprague/skill_factory_dir/skill-factory/thirdparty/verl/sf_scripts/skill_factory_rewards.py trainer.default_local_dir=/scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/skills_in_rl/grpo_lexical_sim/all_tasks/SBON_advanced_grpo_rewards/verl/checkpoints actor_rollout_ref.model.trust_remote_code=True critic.model.trust_remote_code=True trainer.nnodes=2 trainer.n_gpus_per_node=1
[DEBUG] Found 1 global_step directories
[DEBUG] Checking new checkpoint: global_step_1
[DEBUG] Actor directory exists for global_step_1
[INFO] Found complete checkpoint: global_step_1
2025-08-20 17:25:57,022 INFO worker.py:1554 -- Using address 129.114.17.13:6379 set in the environment variable RAY_ADDRESS
2025-08-20 17:25:57,023 INFO worker.py:1694 -- Connecting to existing Ray cluster at address: 129.114.17.13:6379...
2025-08-20 17:25:57,029 INFO worker.py:1879 -- Connected to Ray cluster. View the dashboard at 127.0.0.1:8265
(TaskRunner pid=2182093) Generating train split: 0 examples [00:00, ? examples/s]
(TaskRunner pid=2182093) Generating train split: 1000 examples [00:00, 5834.13 examples/s]
(TaskRunner pid=2182093) Generating train split: 1000 examples [00:00, 3387.26 examples/s]
(TaskRunner pid=2182093) Generating train split: 0 examples [00:00, ? examples/s]
(TaskRunner pid=2182093) Generating train split: 250 examples [00:00, 3255.21 examples/s]
(TaskRunner pid=2182093) DeprecationWarning: `ray.state.available_resources_per_node` is a private attribute and access will be removed in a future Ray version.
(TaskRunner pid=2182093) WARNING:2025-08-20 17:26:11,067:Waiting for register center actor xMEbdG_register_center to be ready. Elapsed time: 0 seconds out of 300 seconds.
[ERROR] Intermediate checkpoint upload failed for global_step_1: 404 Client Error. (Request ID: Root=1-68a64b93-6315b0b139813cab7901220a;fb1310be-2aaa-4dab-8dd8-462756143c06)
Revision Not Found for url: https://huggingface.co/api/models/TAUR-dev/M-SBON_advanced_grpo_rewards-TEST-grpo_adv_rwds-rl/preupload/checkpoint-global_step_1.
Invalid rev id: checkpoint-global_step_1
[WARNING] Failed to upload checkpoint: global_step_1
(WorkerDict pid=2182382) Flash Attention 2.0 only supports torch.float16 and torch.bfloat16 dtypes, but the current dype in Qwen2ForCausalLM is torch.float32. You should run training or inference using Automatic Mixed-Precision via the `with torch.autocast(device_type='torch_device'):` decorator, or load the model with the `torch_dtype` argument. Example: `model = AutoModel.from_pretrained("openai/whisper-tiny", attn_implementation="flash_attention_2", torch_dtype=torch.float16)`
(WorkerDict pid=2182382) You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
[DEBUG] Found 1 global_step directories
[DEBUG] Checking new checkpoint: global_step_1
[DEBUG] Actor directory exists for global_step_1
[INFO] Found complete checkpoint: global_step_1
(TaskRunner pid=2182093) wandb: Currently logged in as: zsprague (ut_nlp_deduce) to https://api.wandb.ai. Use `wandb login --relogin` to force relogin
(WorkerDict pid=1586822, ip=129.114.17.14) Flash Attention 2.0 only supports torch.float16 and torch.bfloat16 dtypes, but the current dype in Qwen2ForCausalLM is torch.float32. You should run training or inference using Automatic Mixed-Precision via the `with torch.autocast(device_type='torch_device'):` decorator, or load the model with the `torch_dtype` argument. Example: `model = AutoModel.from_pretrained("openai/whisper-tiny", attn_implementation="flash_attention_2", torch_dtype=torch.float16)`
(WorkerDict pid=1586822, ip=129.114.17.14) You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
(TaskRunner pid=2182093) wandb: Tracking run with wandb version 0.19.11
(TaskRunner pid=2182093) wandb: Run data is saved locally in /scratch/10416/zaynesprague/skill_factory_dir/skill-factory/experiments/model_training_experiments/skills_in_rl_exp/wandb/run-20250820_172707-byernxng
(TaskRunner pid=2182093) wandb: Run `wandb offline` to turn off syncing.
(TaskRunner pid=2182093) wandb: Syncing run SBON_advanced_grpo_rewards-TEST-grpo_adv_rwds_rl
(TaskRunner pid=2182093) wandb: ⭐️ View project at https://wandb.ai/ut_nlp_deduce/rl_skills__8_13_25
(TaskRunner pid=2182093) wandb: 🚀 View run at https://wandb.ai/ut_nlp_deduce/rl_skills__8_13_25/runs/byernxng
(TaskRunner pid=2182093) Training Progress: 0%| | 0/90 [00:00<?, ?it/s]
[ERROR] Intermediate checkpoint upload failed for global_step_1: 404 Client Error. (Request ID: Root=1-68a64bc1-05d5ab1e79b7ff5a52d5ada5;da86958d-1530-4633-9cab-a2afd2536c41)
Revision Not Found for url: https://huggingface.co/api/models/TAUR-dev/M-SBON_advanced_grpo_rewards-TEST-grpo_adv_rwds-rl/preupload/checkpoint-global_step_1.
Invalid rev id: checkpoint-global_step_1
[WARNING] Failed to upload checkpoint: global_step_1
[DEBUG] Found 1 global_step directories
[DEBUG] Checking new checkpoint: global_step_1
[DEBUG] Actor directory exists for global_step_1
[INFO] Found complete checkpoint: global_step_1
[ERROR] Intermediate checkpoint upload failed for global_step_1: 404 Client Error. (Request ID: Root=1-68a64bef-485ff132075682e10b6cf396;7966189f-1312-473e-bb7d-76c414abb0d4)
Revision Not Found for url: https://huggingface.co/api/models/TAUR-dev/M-SBON_advanced_grpo_rewards-TEST-grpo_adv_rwds-rl/preupload/checkpoint-global_step_1.
Invalid rev id: checkpoint-global_step_1
[WARNING] Failed to upload checkpoint: global_step_1
[DEBUG] Found 1 global_step directories
[DEBUG] Checking new checkpoint: global_step_1
[DEBUG] Actor directory exists for global_step_1
[INFO] Found complete checkpoint: global_step_1
[ERROR] Intermediate checkpoint upload failed for global_step_1: 404 Client Error. (Request ID: Root=1-68a64c1e-26f8d3882ad761d13f59e25c;09e96c28-b24b-4274-8c0b-f57daeb784e0)
Revision Not Found for url: https://huggingface.co/api/models/TAUR-dev/M-SBON_advanced_grpo_rewards-TEST-grpo_adv_rwds-rl/preupload/checkpoint-global_step_1.
Invalid rev id: checkpoint-global_step_1
[WARNING] Failed to upload checkpoint: global_step_1
[DEBUG] Found 1 global_step directories
[DEBUG] Checking new checkpoint: global_step_1
[DEBUG] Actor directory exists for global_step_1
[INFO] Found complete checkpoint: global_step_1
[ERROR] Intermediate checkpoint upload failed for global_step_1: 404 Client Error. (Request ID: Root=1-68a64c4d-5af9b3390bfcd453422dec77;0d219d9d-f9de-4a58-b283-b6c1ceb2ca39)
Revision Not Found for url: https://huggingface.co/api/models/TAUR-dev/M-SBON_advanced_grpo_rewards-TEST-grpo_adv_rwds-rl/preupload/checkpoint-global_step_1.
Invalid rev id: checkpoint-global_step_1
[WARNING] Failed to upload checkpoint: global_step_1
[DEBUG] Found 1 global_step directories
[DEBUG] Checking new checkpoint: global_step_1
[DEBUG] Actor directory exists for global_step_1
[INFO] Found complete checkpoint: global_step_1
(WorkerDict pid=2182382) INFO:2025-08-20 17:30:10,270:[Rank 0] Saved model to /scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/skills_in_rl/grpo_lexical_sim/all_tasks/SBON_advanced_grpo_rewards/verl/checkpoints/global_step_1/actor/model_world_size_2_rank_0.pt
(WorkerDict pid=1586822, ip=129.114.17.14) INFO:2025-08-20 17:30:18,070:[Rank 1] Saved optim to /scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/skills_in_rl/grpo_lexical_sim/all_tasks/SBON_advanced_grpo_rewards/verl/checkpoints/global_step_1/actor/optim_world_size_2_rank_1.pt
(WorkerDict pid=1586822, ip=129.114.17.14) INFO:2025-08-20 17:30:10,519:[Rank 1] Saved model to /scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/skills_in_rl/grpo_lexical_sim/all_tasks/SBON_advanced_grpo_rewards/verl/checkpoints/global_step_1/actor/model_world_size_2_rank_1.pt
(WorkerDict pid=1586822, ip=129.114.17.14) INFO:2025-08-20 17:30:18,106:[Rank 1] Saved extra_state to /scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/skills_in_rl/grpo_lexical_sim/all_tasks/SBON_advanced_grpo_rewards/verl/checkpoints/global_step_1/actor/extra_state_world_size_2_rank_1.pt
[ERROR] Intermediate checkpoint upload failed for global_step_1: 404 Client Error. (Request ID: Root=1-68a64c7b-38bac0a860da4222320e446f;0e7f0285-85b8-47d4-8865-c897c68038fa)
Revision Not Found for url: https://huggingface.co/api/models/TAUR-dev/M-SBON_advanced_grpo_rewards-TEST-grpo_adv_rwds-rl/preupload/checkpoint-global_step_1.
Invalid rev id: checkpoint-global_step_1
[WARNING] Failed to upload checkpoint: global_step_1
(WorkerDict pid=2182382) INFO:2025-08-20 17:30:19,948:[Rank 0] Saved model config and tokenizer class to /scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/skills_in_rl/grpo_lexical_sim/all_tasks/SBON_advanced_grpo_rewards/verl/checkpoints/global_step_1/actor/huggingface
(TaskRunner pid=2182093) Training Progress: 1%| | 1/90 [03:10<4:43:03, 190.83s/it]
[DEBUG] Found 1 global_step directories
[DEBUG] Checking new checkpoint: global_step_1
[DEBUG] Actor directory exists for global_step_1
[INFO] Found complete checkpoint: global_step_1
[ERROR] Intermediate checkpoint upload failed for global_step_1: 404 Client Error. (Request ID: Root=1-68a64cb0-7fc58960225eeb401fa33af7;8d4eacc7-5201-4363-8a5a-cb56ccaf707f)
Revision Not Found for url: https://huggingface.co/api/models/TAUR-dev/M-SBON_advanced_grpo_rewards-TEST-grpo_adv_rwds-rl/preupload/checkpoint-global_step_1.
Invalid rev id: checkpoint-global_step_1
[WARNING] Failed to upload checkpoint: global_step_1
[DEBUG] Found 1 global_step directories
[DEBUG] Checking new checkpoint: global_step_1
[DEBUG] Actor directory exists for global_step_1
[INFO] Found complete checkpoint: global_step_1
[ERROR] Intermediate checkpoint upload failed for global_step_1: 404 Client Error. (Request ID: Root=1-68a64ce0-6c97d9492fb5f49e0ff4d284;1b738d51-4ce5-438b-b09f-916b329bdfda)
Revision Not Found for url: https://huggingface.co/api/models/TAUR-dev/M-SBON_advanced_grpo_rewards-TEST-grpo_adv_rwds-rl/preupload/checkpoint-global_step_1.
Invalid rev id: checkpoint-global_step_1
[WARNING] Failed to upload checkpoint: global_step_1
[DEBUG] Found 1 global_step directories
[DEBUG] Checking new checkpoint: global_step_1
[DEBUG] Actor directory exists for global_step_1
[INFO] Found complete checkpoint: global_step_1
[ERROR] Intermediate checkpoint upload failed for global_step_1: 404 Client Error. (Request ID: Root=1-68a64d10-313d70b501e002db7557c42d;3ed4c19a-90fe-4463-b5bd-e5508e519f93)
Revision Not Found for url: https://huggingface.co/api/models/TAUR-dev/M-SBON_advanced_grpo_rewards-TEST-grpo_adv_rwds-rl/preupload/checkpoint-global_step_1.
Invalid rev id: checkpoint-global_step_1
[WARNING] Failed to upload checkpoint: global_step_1
[ERROR] Stage error: KeyboardInterrupt:

stderr_content:
/work/10416/zaynesprague/anaconda3/envs/verl2/lib/python3.10/site-packages/huggingface_hub/file_download.py:980: UserWarning: `local_dir_use_symlinks` parameter is deprecated and will be ignored. The process to download files to a local folder has been updated and do not rely on symlinks anymore. You only need to pass a destination folder as`local_dir`.
For more details, check out https://huggingface.co/docs/huggingface_hub/main/en/guides/download#download-files-to-local-folder.
warnings.warn(
Fetching 12 files: 0%| | 0/12 [00:00<?, ?it/s]
Fetching 12 files: 100%|██████████████████████████████████████| 12/12 [00:00<00:00, 189.27it/s]
Fetching 12 files: 0%| | 0/12 [00:00<?, ?it/s]
Fetching 12 files: 100%|██████████████████████████████████████| 12/12 [00:00<00:00, 699.14it/s]
2025-08-20 17:25:48,017 INFO worker.py:1694 -- Connecting to existing Ray cluster at address: 129.114.17.13:6379...
2025-08-20 17:25:48,022 INFO worker.py:1879 -- Connected to Ray cluster. View the dashboard at 127.0.0.1:8265

experiment_name: SBON_advanced_grpo_rewards-TEST-grpo_adv_rwds
elapsed_time_seconds: 483.8858
stage_complete: true
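The repeated 404 errors in Row 2 (and again in Row 3) come from uploading the intermediate checkpoint to a Hub revision that does not exist (`Invalid rev id: checkpoint-global_step_1`). A minimal sketch of one way around this, assuming the upload goes through `huggingface_hub`; the helper is illustrative rather than the pipeline's actual uploader, and the repo id is copied from the error URL:

```python
from huggingface_hub import HfApi

def upload_intermediate_checkpoint(local_dir: str, step: int) -> None:
    """Create the target branch first so the revision exists, then upload the folder."""
    api = HfApi()
    repo_id = "TAUR-dev/M-SBON_advanced_grpo_rewards-TEST-grpo_adv_rwds-rl"
    branch = f"checkpoint-global_step_{step}"

    # Uploading straight to a non-existent revision is what produces
    # "404 ... Revision Not Found ... Invalid rev id" in the log above.
    api.create_branch(repo_id=repo_id, branch=branch, exist_ok=True)

    api.upload_folder(
        repo_id=repo_id,
        folder_path=local_dir,
        revision=branch,
        commit_message=f"Intermediate checkpoint global_step_{step}",
    )
```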
Row 3

timestamp: 2025-08-20T17:34:44.258488
end_timestamp: 2025-08-20T17:36:00.687974
stage_name: verl_rl
stage_number: 1
level: INFO
message: Complete log capture for stage: verl_rl

stdout_content:
[INFO] Starting stage: VeRL RL training - rl
[INFO] Data preparation succeeded
[INFO] Setting up ray cluster
[DEBUG] SLURM cluster info: 2 nodes, 1 GPUs/node
[INFO] Node list: c608-[062,071]
[DEBUG] Head node: c608-062
[DEBUG] Ray head address: 129.114.17.13:6379
[INFO] Starting Ray head on c608-062...
[INFO] Waiting for head node to initialize...
[DEBUG] Starting 1 worker nodes...
[DEBUG] Starting worker 1: c608-071
[INFO] Waiting for Ray cluster to stabilize...
[INFO] Connecting to Ray cluster at 129.114.17.13:6379...
[INFO] Ray cluster connected successfully (stats from the connection):
[INFO] Total GPUs: 2.0
[INFO] Available GPUs: 2.0
[INFO] Total CPUs: 64.0
[INFO] SLURM Ray cluster setup completed
[INFO] Starting checkpoint monitoring for intermediate uploads...
[INFO] Intermediate checkpoint upload enabled
[DEBUG] Found 1 global_step directories
[DEBUG] Checking new checkpoint: global_step_1
[DEBUG] Running verl command:
python -m verl.trainer.main_ppo custom_reward_function.reward_kwargs.format_score_weight=0.0 custom_reward_function.reward_kwargs.format_score_v2_weight=0.0 custom_reward_function.reward_kwargs.transition_penalty_weight=0.0 custom_reward_function.reward_kwargs.similarity_penalty_weight=1.0 custom_reward_function.reward_kwargs.sample_correctness_weight=1.0 custom_reward_function.reward_kwargs.sample_count_penalty_weight=1.0 custom_reward_function.reward_kwargs.reward_min=-1.0 custom_reward_function.reward_kwargs.reward_max=10.0 trainer.total_epochs=30 actor_rollout_ref.actor.optim.lr=5e-06 trainer.save_freq=1 trainer.test_freq=10 trainer.val_before_train=False algorithm.adv_estimator=grpo actor_rollout_ref.rollout.n=2 data.train_batch_size=256 actor_rollout_ref.actor.ppo_mini_batch_size=64 actor_rollout_ref.actor.ppo_micro_batch_size_per_gpu=8 actor_rollout_ref.ref.log_prob_micro_batch_size_per_gpu=16 actor_rollout_ref.rollout.log_prob_micro_batch_size_per_gpu=16 actor_rollout_ref.model.enable_gradient_checkpointing=True actor_rollout_ref.model.enable_activation_offload=True actor_rollout_ref.rollout.gpu_memory_utilization=0.8 actor_rollout_ref.model.use_remove_padding=True actor_rollout_ref.actor.strategy=fsdp2 actor_rollout_ref.actor.fsdp_config.forward_prefetch=True actor_rollout_ref.ref.fsdp_config.forward_prefetch=True reward_model.model.fsdp_config.forward_prefetch=True hydra.run.dir=/scratch/10416/zaynesprague/skill_injection_outputs/sf/grpo_training/exp1/hydra/grpo hydra.output_subdir=null hydra.job.chdir=False actor_rollout_ref.rollout.tensor_model_parallel_size=1 data.max_prompt_length=512 data.max_response_length=4096 actor_rollout_ref.model.path=/scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/skills_in_rl/grpo_lexical_sim/all_tasks/SBON_advanced_grpo_rewards/verl/prefetched_models/TAUR_dev__M_skills_in_rl_v2__1e6_all_tasks_sft_sft actor_rollout_ref.rollout.dtype=bfloat16 critic.optim.lr=1e-05 critic.model.path=/scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/skills_in_rl/grpo_lexical_sim/all_tasks/SBON_advanced_grpo_rewards/verl/prefetched_models/TAUR_dev__M_skills_in_rl_v2__1e6_all_tasks_sft_sft critic.ppo_micro_batch_size_per_gpu=1 algorithm.kl_ctrl.kl_coef=0.001 trainer.logger=[console,wandb] trainer.project_name=rl_skills__8_13_25 trainer.experiment_name=SBON_advanced_grpo_rewards-TEST-grpo_adv_rwds_rl trainer.resume_mode=disable data.train_files=/scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/skills_in_rl/grpo_lexical_sim/all_tasks/SBON_advanced_grpo_rewards/verl/data/train.parquet data.val_files=/scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/skills_in_rl/grpo_lexical_sim/all_tasks/SBON_advanced_grpo_rewards/verl/data/test.parquet custom_reward_function.path=/scratch/10416/zaynesprague/skill_factory_dir/skill-factory/thirdparty/verl/sf_scripts/skill_factory_rewards.py trainer.default_local_dir=/scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/skills_in_rl/grpo_lexical_sim/all_tasks/SBON_advanced_grpo_rewards/verl/checkpoints actor_rollout_ref.model.trust_remote_code=True critic.model.trust_remote_code=True trainer.nnodes=2 trainer.n_gpus_per_node=1
[DEBUG] Actor directory exists for global_step_1
[INFO] Found complete checkpoint: global_step_1
2025-08-20 17:35:36,991 INFO worker.py:1554 -- Using address 129.114.17.13:6379 set in the environment variable RAY_ADDRESS
2025-08-20 17:35:36,991 INFO worker.py:1694 -- Connecting to existing Ray cluster at address: 129.114.17.13:6379...
2025-08-20 17:35:36,998 INFO worker.py:1879 -- Connected to Ray cluster. View the dashboard at 127.0.0.1:8265
(TaskRunner pid=2187998) Generating train split: 0 examples [00:00, ? examples/s]
(TaskRunner pid=2187998) Generating train split: 1000 examples [00:00, 5663.70 examples/s]
(TaskRunner pid=2187998) Generating train split: 1000 examples [00:00, 2834.31 examples/s]
(TaskRunner pid=2187998) Generating train split: 0 examples [00:00, ? examples/s]
(TaskRunner pid=2187998) Generating train split: 250 examples [00:00, 2497.90 examples/s]
(TaskRunner pid=2187998) DeprecationWarning: `ray.state.available_resources_per_node` is a private attribute and access will be removed in a future Ray version.
(TaskRunner pid=2187998) WARNING:2025-08-20 17:35:48,742:Waiting for register center actor pAWhvR_register_center to be ready. Elapsed time: 0 seconds out of 300 seconds.
[ERROR] Intermediate checkpoint upload failed for global_step_1: 404 Client Error. (Request ID: Root=1-68a64dc7-63d7e2952762dd407ed95976;279ba5be-6710-4906-a1b7-b60045b600aa)
Revision Not Found for url: https://huggingface.co/api/models/TAUR-dev/M-SBON_advanced_grpo_rewards-TEST-grpo_adv_rwds-rl/preupload/checkpoint-step-1.
Invalid rev id: checkpoint-step-1
[WARNING] Failed to upload checkpoint: global_step_1
[ERROR] Stage error: KeyboardInterrupt:

stderr_content:
/work/10416/zaynesprague/anaconda3/envs/verl2/lib/python3.10/site-packages/huggingface_hub/file_download.py:980: UserWarning: `local_dir_use_symlinks` parameter is deprecated and will be ignored. The process to download files to a local folder has been updated and do not rely on symlinks anymore. You only need to pass a destination folder as`local_dir`.
For more details, check out https://huggingface.co/docs/huggingface_hub/main/en/guides/download#download-files-to-local-folder.
warnings.warn(
Fetching 12 files: 0%| | 0/12 [00:00<?, ?it/s]
Fetching 12 files: 100%|██████████████████████████████████████| 12/12 [00:00<00:00, 482.07it/s]
Fetching 12 files: 0%| | 0/12 [00:00<?, ?it/s]
Fetching 12 files: 100%|██████████████████████████████████████| 12/12 [00:00<00:00, 671.18it/s]
2025-08-20 17:35:31,004 INFO worker.py:1694 -- Connecting to existing Ray cluster at address: 129.114.17.13:6379...
2025-08-20 17:35:31,010 INFO worker.py:1879 -- Connected to Ray cluster. View the dashboard at 127.0.0.1:8265

experiment_name: SBON_advanced_grpo_rewards-TEST-grpo_adv_rwds
elapsed_time_seconds: 76.429486
stage_complete: true
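Each row's stdout also shows the checkpoint monitor's polling pattern ("Found N global_step directories", "Checking new checkpoint", "Actor directory exists", "Found complete checkpoint"). A minimal sketch of such a monitor, offered only as an illustration of the logged steps rather than the pipeline's actual implementation:

```python
import time
from pathlib import Path

def watch_checkpoints(checkpoint_root: str, poll_s: float = 30.0):
    """Yield checkpoint directories that look complete (i.e. contain an actor/ subdir)."""
    seen: set[str] = set()
    root = Path(checkpoint_root)
    while True:
        step_dirs = sorted(root.glob("global_step_*"))
        print(f"[DEBUG] Found {len(step_dirs)} global_step directories")
        for step_dir in step_dirs:
            if step_dir.name in seen:
                continue
            print(f"[DEBUG] Checking new checkpoint: {step_dir.name}")
            if (step_dir / "actor").is_dir():
                print(f"[INFO] Found complete checkpoint: {step_dir.name}")
                seen.add(step_dir.name)
                yield step_dir
        time.sleep(poll_s)
```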