See axolotl config

axolotl version: `0.9.2`

```yaml
base_model: timarni/qwen3_dpo_100k

# Automatically upload checkpoint and final model to HF
# hub_model_id: username/custom_model_name

plugins:
  - axolotl.integrations.cut_cross_entropy.CutCrossEntropyPlugin

strict: false

chat_template: qwen3
datasets:
  - path: timarni/MNLP_STEM_IT
    type: alpaca
    split: train
shuffle_merged_datasets: true
val_set_size: 0.1
output_dir: ./outputs/dpo_100k_STEM_IT
dataset_prepared_path: last_run_prepared

sequence_len: 4096  # 2048
sample_packing: true  # was true -> need to check whether it actually learns on the samples or not (better understand the hyperparameter and eventually install axolotl to debug)
eval_sample_packing: true
pad_to_sequence_len: true
# train_on_inputs: true  # NEW
# group_by_length: false  # NEW?

# To make sure that no LoRA is used
adapter: null
lora: false
merge_lora: false

wandb_project: mnlp_project
wandb_entity: tim-arni
wandb_watch:
wandb_name: dpo_100k_STEM_IT
wandb_log_model:

gradient_accumulation_steps: 16  # 2
micro_batch_size: 2  # 1
num_epochs: 3
optimizer: adamw_torch
lr_scheduler: cosine
learning_rate: 0.00005
# cosine_min_lr_ratio: 0.1
warmup_ratio: 0.05
weight_decay: 0.01

bf16: auto
tf32: true

gradient_checkpointing: offload
gradient_checkpointing_kwargs:
  use_reentrant: false
resume_from_checkpoint:
logging_steps: 1
gradient_clipping: 1.0  # or max_grad_norm?
flash_attention: true

evals_per_epoch: 4
saves_per_epoch: 2
save_total_limit: 20

special_tokens:
```
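For readers unfamiliar with the `type: alpaca` dataset format: axolotl reads each row's instruction/input/output fields and renders them into an Alpaca-style prompt. Below is a minimal sketch of that mapping, assuming the classic Stanford Alpaca template; the exact string axolotl uses, especially together with `chat_template: qwen3`, may differ, and the example row is hypothetical, not taken from timarni/MNLP_STEM_IT.

```python
# Sketch of the data layout that `type: alpaca` expects.
# Assumption: classic Alpaca prompt template; the example row is hypothetical.
row = {
    "instruction": "Explain what an eigenvalue is.",
    "input": "",
    "output": "An eigenvalue of a matrix A is a scalar ...",
}

# Prompt the model is conditioned on; `output` becomes the training target.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    f"### Instruction:\n{row['instruction']}\n\n### Response:\n"
)
target = row["output"]
```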
# outputs/dpo_100k_STEM_IT

This model is a fine-tuned version of timarni/qwen3_dpo_100k on the timarni/MNLP_STEM_IT dataset.
It achieves the following results on the evaluation set:

- Loss: 0.1704
## Model description

More information needed

## Intended uses & limitations

More information needed
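As a rough illustration of intended use, here is a minimal inference sketch. It assumes the trained checkpoint is available locally under `./outputs/dpo_100k_STEM_IT` (the config's `output_dir`; `hub_model_id` is commented out, so substitute a Hub id if the model was uploaded).

```python
# Minimal inference sketch.
# Assumption: the checkpoint sits in ./outputs/dpo_100k_STEM_IT (hypothetical local path).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "./outputs/dpo_100k_STEM_IT"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path, torch_dtype="auto", device_map="auto"
)

# The config sets chat_template: qwen3, so the tokenizer's chat template is used.
messages = [{"role": "user", "content": "State Newton's second law."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Strip the prompt tokens and decode only the generated continuation.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```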
## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 12
- num_epochs: 3.0
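For reference, the total train batch size and warmup length above follow from the axolotl config; a quick sanity check, assuming training ran on a single GPU (consistent with total_train_batch_size = 32):

```python
# Derive the effective batch size and warmup steps from the config values.
# Assumption: a single GPU was used.
micro_batch_size = 2
gradient_accumulation_steps = 16
num_gpus = 1

total_train_batch_size = micro_batch_size * gradient_accumulation_steps * num_gpus
print(total_train_batch_size)  # 32

# warmup_ratio: 0.05 over roughly 241 optimizer steps (3 epochs x ~80 steps/epoch,
# as implied by the training-results table below).
total_steps = 241
warmup_steps = round(0.05 * total_steps)
print(warmup_steps)  # 12
```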
### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.0297        | 0.0124 | 1    | 1.0920          |
| 0.1602        | 0.2479 | 20   | 0.1863          |
| 0.1495        | 0.4957 | 40   | 0.1758          |
| 0.1409        | 0.7436 | 60   | 0.1709          |
| 0.1497        | 0.9915 | 80   | 0.1653          |
| 0.1118        | 1.2479 | 100  | 0.1638          |
| 0.1119        | 1.4957 | 120  | 0.1595          |
| 0.1068        | 1.7436 | 140  | 0.1590          |
| 0.1085        | 1.9915 | 160  | 0.1571          |
| 0.0833        | 2.2479 | 180  | 0.1672          |
| 0.0759        | 2.4957 | 200  | 0.1706          |
| 0.0875        | 2.7436 | 220  | 0.1705          |
| 0.0756        | 2.9915 | 240  | 0.1704          |
### Framework versions

- Transformers 4.51.3
- Pytorch 2.5.1+cu121
- Datasets 3.5.1
- Tokenizers 0.21.1