See axolotl config

axolotl version: `0.9.2`

```yaml
base_model: Qwen/Qwen3-0.6B-Base
# Automatically upload checkpoint and final model to HF
# hub_model_id: username/custom_model_name
plugins:
  - axolotl.integrations.cut_cross_entropy.CutCrossEntropyPlugin
strict: false
chat_template: qwen3
datasets:
  - path: timarni/MNLP_STEM_IT_HARD # timarni/MNLP_STEM_IT
    type: alpaca
    split: train
shuffle_merged_datasets: true
val_set_size: 0.1
output_dir: ./outputs/base_it_hard_2
dataset_prepared_path: last_run_prepared
sequence_len: 4096 #2048
sample_packing: true # was true -> need to check whether it actually learns on the samples (better understand the hyperparameter and eventually install axolotl to debug)
eval_sample_packing: false
pad_to_sequence_len: true
# train_on_inputs: true # NEW
# group_by_length: false # NEW?
# To make sure no LoRA is applied
adapter: null
lora: false
merge_lora: false
wandb_project: mnlp_project
wandb_entity: tim-arni
wandb_watch:
wandb_name: base_it_hard_2
wandb_log_model:
gradient_accumulation_steps: 16 # 2
micro_batch_size: 2 # 1
num_epochs: 5
optimizer: adamw_torch
lr_scheduler: cosine
learning_rate: 0.00005 # 0.00005
# cosine_min_lr_ratio: 0.1
warmup_ratio: 0.05
weight_decay: 0.01
bf16: auto
tf32: true
gradient_checkpointing: offload
gradient_checkpointing_kwargs:
  use_reentrant: false
resume_from_checkpoint:
logging_steps: 1
gradient_clipping: 1.0 # or max_grad_norm?
flash_attention: true
evals_per_epoch: 4
saves_per_epoch: 2
save_total_limit: 10
special_tokens:
```
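Runs with a config like this are typically launched through axolotl's CLI (e.g. `axolotl train config.yaml`). The `type: alpaca` dataset setting means axolotl renders each example with the standard Alpaca instruction template before tokenization; a minimal sketch of that template follows (the exact whitespace and wording in axolotl's internal prompter may differ slightly):

```python
# Sketch of the standard Alpaca prompt template implied by `type: alpaca`.
# Exact whitespace/wording in axolotl's prompter may differ slightly.
def alpaca_prompt(instruction: str, input_text: str = "") -> str:
    if input_text:
        return (
            "Below is an instruction that describes a task, paired with an input "
            "that provides further context. Write a response that appropriately "
            "completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{input_text}\n\n"
            "### Response:\n"
        )
    return (
        "Below is an instruction that describes a task. Write a response that "
        "appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )
```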
# outputs/base_it_hard_2
This model is a fine-tuned version of Qwen/Qwen3-0.6B-Base on the timarni/MNLP_STEM_IT_HARD dataset. It achieves the following results on the evaluation set:
- Loss: 0.1415
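A minimal inference sketch with `transformers`, assuming the final checkpoint sits in the `output_dir` from the config above (substitute the Hub repo id if the model was pushed to the Hub). Prompts should use the Alpaca format shown earlier, since that is what the model was fine-tuned on:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: loading from the local output_dir of this run; replace with
# the Hub repo id if the checkpoint was uploaded.
model_path = "./outputs/base_it_hard_2"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype="auto")

# Prompt in the Alpaca format the model was trained on.
prompt = (
    "Below is an instruction that describes a task. Write a response that "
    "appropriately completes the request.\n\n"
    "### Instruction:\nState Newton's second law.\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```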
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- total_eval_batch_size: 4
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 2
- num_epochs: 5.0
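The derived values follow directly from the config; a quick sanity check of the arithmetic (step count taken from the results table below, warmup rounding assumed to match the HF Trainer's ceiling behavior):

```python
import math

# Effective batch sizes for this run.
micro_batch_size = 2             # per-device train batch size
gradient_accumulation_steps = 16
num_devices = 2                  # multi-GPU run

total_train_batch_size = micro_batch_size * gradient_accumulation_steps * num_devices
assert total_train_batch_size == 64

eval_batch_size = 2              # per-device; no gradient accumulation at eval time
assert eval_batch_size * num_devices == 4  # total_eval_batch_size

# warmup_ratio 0.05 over the run's 24 optimizer steps, assuming the
# HF Trainer's ceiling rounding, gives the 2 warmup steps listed above.
total_steps = 24
assert math.ceil(0.05 * total_steps) == 2
```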
### Training results
| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.607         | 0.1684 | 1    | 0.5910          |
| 0.6003        | 0.3368 | 2    | 0.1913          |
| 0.1281        | 0.6737 | 4    | 0.3051          |
| 0.1363        | 1.0    | 6    | 0.1420          |
| 0.1004        | 1.3368 | 8    | 0.1391          |
| 0.0792        | 1.6737 | 10   | 0.1415          |
| 0.0857        | 2.0    | 12   | 0.1386          |
| 0.0583        | 2.3368 | 14   | 0.1382          |
| 0.0531        | 2.6737 | 16   | 0.1410          |
| 0.0635        | 3.0    | 18   | 0.1418          |
| 0.0468        | 3.3368 | 20   | 0.1417          |
| 0.0461        | 3.6737 | 22   | 0.1417          |
| 0.0588        | 4.0    | 24   | 0.1415          |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.5.1+cu121
- Datasets 3.5.1
- Tokenizers 0.21.1