See axolotl config

axolotl version: `0.9.2`

```yaml
base_model: Qwen/Qwen3-0.6B-Base
# Automatically upload checkpoint and final model to HF
# hub_model_id: username/custom_model_name
plugins:
- axolotl.integrations.cut_cross_entropy.CutCrossEntropyPlugin
strict: false
chat_template: qwen3
datasets:
  - path: timarni/MNLP_STEM_IT_HARD
    type: alpaca
    split: train
shuffle_merged_datasets: true
val_set_size: 0.1
output_dir: ./outputs/qwen3_wiki_3500_it_hard
dataset_prepared_path: last_run_prepared
sequence_len: 4096
sample_packing: true
eval_sample_packing: false
pad_to_sequence_len: true
# train_on_inputs: true
# group_by_length: false
# Full fine-tune: make sure no LoRA adapter is applied
adapter: null
lora: false
merge_lora: false
wandb_project: mnlp_project
wandb_entity: tim-arni
wandb_watch:
wandb_name: qwen3_wiki_3500_it_hard
wandb_log_model:
gradient_accumulation_steps: 16
micro_batch_size: 2
num_epochs: 5
optimizer: adamw_torch
lr_scheduler: cosine
learning_rate: 0.00005
# cosine_min_lr_ratio: 0.1
warmup_steps: 20
weight_decay: 0.01
bf16: auto
tf32: true
gradient_checkpointing: offload
gradient_checkpointing_kwargs:
  use_reentrant: false
resume_from_checkpoint:
logging_steps: 1
gradient_clipping: 1.0 # or max_grad_norm?
flash_attention: true
evals_per_epoch: 4
saves_per_epoch: 2
save_total_limit: 10
special_tokens:
```
# outputs/qwen3_wiki_3500_it_hard
This model is a fine-tuned version of Qwen/Qwen3-0.6B-Base on the timarni/MNLP_STEM_IT_HARD dataset. It achieves the following results on the evaluation set:
- Loss: 0.1477
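
A config like the one above is typically launched with axolotl (for example `accelerate launch -m axolotl.cli.train config.yaml`). For using the resulting checkpoint, a minimal inference sketch is shown below; the local path comes from `output_dir` in the config, and the prompt is an illustrative placeholder:

```python
# Minimal inference sketch (assumes the final checkpoint sits in the
# output_dir from the config; the uploaded Hub repo id, if any, may differ).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_dir = "./outputs/qwen3_wiki_3500_it_hard"

tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModelForCausalLM.from_pretrained(
    model_dir,
    torch_dtype=torch.bfloat16,  # training ran in bf16
    device_map="auto",
)

# The model was trained with the qwen3 chat template, so format prompts with it.
messages = [{"role": "user", "content": "State Newton's second law of motion."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    output = model.generate(input_ids, max_new_tokens=256)

print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```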
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- total_eval_batch_size: 4
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 20
- num_epochs: 5.0
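
The derived batch sizes above follow directly from the config; a quick sanity check of the arithmetic (all values taken from this card):

```python
# Effective batch sizes implied by the axolotl config and the 2-GPU run.
micro_batch_size = 2              # per-device train batch size
gradient_accumulation_steps = 16
num_devices = 2

total_train_batch_size = micro_batch_size * gradient_accumulation_steps * num_devices
total_eval_batch_size = micro_batch_size * num_devices  # no grad accumulation at eval

assert total_train_batch_size == 64
assert total_eval_batch_size == 4
```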
### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.607         | 0.1684 | 1    | 0.5910          |
| 0.6003        | 0.3368 | 2    | 0.5576          |
| 0.3919        | 0.6737 | 4    | 0.3513          |
| 0.1547        | 1.0    | 6    | 0.1569          |
| 0.1088        | 1.3368 | 8    | 0.1351          |
| 0.0937        | 1.6737 | 10   | 0.1307          |
| 0.0927        | 2.0    | 12   | 0.1296          |
| 0.0796        | 2.3368 | 14   | 0.1299          |
| 0.0665        | 2.6737 | 16   | 0.1309          |
| 0.0586        | 3.0    | 18   | 0.1390          |
| 0.046         | 3.3368 | 20   | 0.1376          |
| 0.0358        | 3.6737 | 22   | 0.1481          |
| 0.0281        | 4.0    | 24   | 0.1477          |
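
Reading the table, epoch boundaries land every 6 optimizer steps, which gives a rough estimate of the packed training-set size; a back-of-the-envelope sketch, assuming the step counts logged above:

```python
# Rough training-set size implied by the logged steps (assumption-laden sketch).
steps_per_epoch = 6          # epoch 1.0 is logged at step 6, epoch 2.0 at step 12
total_train_batch_size = 64  # from the hyperparameters above

packed_sequences_per_epoch = steps_per_epoch * total_train_batch_size
print(packed_sequences_per_epoch)  # ~384 packed sequences per epoch

# sample_packing merges multiple Alpaca examples into each 4096-token sequence,
# so the raw example count per epoch is considerably larger than 384.
```

Note also that validation loss bottoms out around epoch 2 (≈0.130) and drifts upward afterwards, so an earlier checkpoint may generalize better than the final one.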
### Framework versions

- Transformers 4.51.3
- PyTorch 2.5.1+cu121
- Datasets 3.5.1
- Tokenizers 0.21.1