---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen3-8B-Base
tags:
- axolotl
- generated_from_trainer
datasets:
- nate-rahn/0613-wc_attrs_sft_dset
model-index:
- name: 0613-sft_len_wc_attrs-qwen3_8b_base
  results: []
---

[Built with Axolotl](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.9.1`
```yaml
# Name 0613-sft_len_wc_attrs-qwen3_8b_base
# axolotl train red_team_agent/run/t0613/sft_len_wc_attrs-qwen3_8b_base.yaml

base_model: Qwen/Qwen3-8B-Base
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
trust_remote_code: false

# --- Dataset Configuration ---
datasets:
  - path: nate-rahn/0613-wc_attrs_sft_dset
    type: chat_template  # Use the chat_template processing strategy

    # --- Custom Template & Role Mapping ---
    chat_template: chatml       # Use the built-in ChatML template
    field_messages: messages    # Assumes your dataset has a "messages" key with a list of dicts
    message_property_mappings:  # Assumes each dict in the list has "role" and "content" keys
      role: role
      content: content
    roles:                        # Define the roles expected in your dataset for mapping
      user: ["user"]              # Map "user" role in data to internal "user"
      assistant: ["assistant"]    # Map "assistant" role in data to internal "assistant"
      system: ["system"]          # Map "system" role in data to internal "system"

    # --- Training Target ---
    roles_to_train: ["assistant"]
    train_on_eos: turn  # Train on the EOS token at the end of each trained (assistant) turn

dataset_prepared_path: /workspace/data/last_run_prepared

# --- Training Hyperparameters ---
sequence_len: 2048         # Adjust based on your dataset and GPU memory
sample_packing: true       # Pack multiple sequences into one example for efficiency
eval_sample_packing: true
pad_to_sequence_len: true  # Pad sequences to sequence_len

# Full Parameter Finetuning (No adapter specified)
# adapter:  # This is intentionally left blank/removed for full finetuning

# Performance & Precision (H100s excel with bf16)
bf16: true
tf32: true
flash_attention: true  # for qwen

# Batching (Adjust based on GPU memory)
# Effective global batch size = micro_batch_size * gradient_accumulation_steps * num_gpus (4)
# Start low for full finetuning, e.g., 1 * 16 * 4 = 64
micro_batch_size: 2
gradient_accumulation_steps: 32
eval_batch_size: 16  # Can often be higher than micro_batch_size

# Optimizer & Scheduler
optimizer: adamw_torch_fused  # Good choice for newer GPUs
learning_rate: 1e-5           # Common starting point for full SFT
weight_decay: 0.01
lr_scheduler: cosine          # Standard scheduler
warmup_steps: 50
max_grad_norm: 1.0

# Training Duration & Evaluation/Saving
num_epochs: 1        # Adjust as needed, start with 1-3 for SFT
val_set_size: 0.001
logging_steps: 1
evals_per_epoch: 20
saves_per_epoch: 2   # Save 2 times per epoch (adjust based on dataset size)
save_total_limit: 1  # Keep only the last checkpoint

# Memory Saving
gradient_checkpointing: true  # Essential for full finetuning
gradient_checkpointing_kwargs:
  use_reentrant: false  # Prefer non-reentrant if possible

# --- FSDP Configuration (for 4xH100) ---
fsdp:
  - full_shard
  - auto_wrap
fsdp_config:
  fsdp_offload_params: false     # Should not be needed with H100 VRAM
  fsdp_sync_module_states: true  # Important for correctness
  fsdp_use_orig_params: false    # Recommended for memory saving with FSDP
  fsdp_state_dict_type: SHARDED_STATE_DICT  # Options: FULL_STATE_DICT or SHARDED_STATE_DICT (saves disk space)
  # fsdp_transformer_layer_cls_to_wrap: 'Gemma3DecoderLayer'
  fsdp_transformer_layer_cls_to_wrap: 'Qwen3DecoderLayer'
  # fsdp_activation_checkpointing: true  # Alternative way to enable activation checkpointing for FSDP

# --- Special Tokens ---
# Define based on your custom template's terminators. Qwen already uses <|im_end|>
special_tokens:
  eos_token: "<|im_end|>"
  # eos_token: ""

# --- Logging & Saving ---
output_dir: /workspace/red-team-agent/runs/0613-sft_len_wc_attrs-qwen3_8b_base  # Local output directory

# W&B Logging
wandb_project: "red-team-agent"  # Name your W&B project
wandb_entity: "nate"             # IMPORTANT: Replace with your W&B username or team name
wandb_name: "0613-sft_len_wc_attrs-qwen3_8b_base"  # Descriptive run name
# wandb_log_model: "checkpoint"  # Log model checkpoints to W&B Artifacts

# Hugging Face Hub Upload
hub_model_id: "nate-rahn/0613-sft_len_wc_attrs-qwen3_8b_base"  # IMPORTANT: Replace with your desired HF repo ID
hub_strategy: "end"      # Push checkpoints to the Hub ("end" pushes only the final model)
hf_use_auth_token: true  # Required for pushing to the Hub (ensure you're logged in)

# --- Misc ---
seed: 42
```

</details><br>
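
The config above renders each conversation with the built-in ChatML template and treats `<|im_end|>` as the EOS token. As a rough illustration only (the messages below are placeholders, not rows from `nate-rahn/0613-wc_attrs_sft_dset`), a single example is serialized roughly like this before tokenization:

```python
# Rough illustration of the ChatML rendering implied by the config above.
# The conversation below is a placeholder, not taken from the training dataset.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
    {"role": "assistant", "content": "Hi there."},
]

def to_chatml(msgs):
    # ChatML wraps every turn as <|im_start|>{role}\n{content}<|im_end|>\n.
    # With roles_to_train: ["assistant"] and train_on_eos: turn, loss is
    # computed only on assistant turns and on the <|im_end|> that closes them.
    return "".join(
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in msgs
    )

print(to_chatml(messages))
```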

# 0613-sft_len_wc_attrs-qwen3_8b_base

This model is a fine-tuned version of [Qwen/Qwen3-8B-Base](https://huggingface.co/Qwen/Qwen3-8B-Base) on the nate-rahn/0613-wc_attrs_sft_dset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9844

## Model description

More information needed

## Intended uses & limitations

More information needed. A hedged loading and inference sketch is included at the end of this card.

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 32
- total_train_batch_size: 512
- total_eval_batch_size: 128
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- num_epochs: 1.0

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.4347        | 0.0010 | 1    | 2.6739          |
| 1.5017        | 0.0503 | 51   | 1.4456          |
| 1.1364        | 0.1006 | 102  | 1.2134          |
| 0.8372        | 0.1509 | 153  | 1.1617          |
| 1.2334        | 0.2012 | 204  | 1.1215          |
| 1.0855        | 0.2515 | 255  | 1.1147          |
| 0.8381        | 0.3018 | 306  | 1.0716          |
| 1.2104        | 0.3521 | 357  | 1.1103          |
| 1.0675        | 0.4024 | 408  | 1.0673          |
| 0.922         | 0.4527 | 459  | 1.0728          |
| 1.236         | 0.5030 | 510  | 1.0328          |
| 1.0469        | 0.5533 | 561  | 1.0385          |
| 0.8749        | 0.6036 | 612  | 1.0406          |
| 1.361         | 0.6539 | 663  | 1.0145          |
| 1.0454        | 0.7042 | 714  | 1.0028          |
| 0.8827        | 0.7545 | 765  | 0.9996          |
| 0.7909        | 0.8048 | 816  | 0.9933          |
| 1.0521        | 0.8551 | 867  | 0.9874          |
| 0.9136        | 0.9054 | 918  | 0.9850          |
| 0.6762        | 0.9557 | 969  | 0.9844          |

### Framework versions

- Transformers 4.51.3
- Pytorch 2.5.0+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1
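
## Example usage (sketch)

A minimal inference sketch, not an official usage guide: it assumes the pushed checkpoint retains the ChatML chat template and `<|im_end|>` EOS token configured during training, and the prompt below is a placeholder.

```python
# Minimal inference sketch (assumes the checkpoint ships a ChatML chat template
# and that `accelerate` is installed for device_map="auto"). Prompt is a placeholder.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nate-rahn/0613-sft_len_wc_attrs-qwen3_8b_base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

messages = [{"role": "user", "content": "Hello!"}]  # placeholder prompt
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```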