Built with Axolotl

See axolotl config

axolotl version: 0.9.2

base_model: Qwen/Qwen3-0.6B-Base
# Automatically upload checkpoint and final model to HF
# hub_model_id: username/custom_model_name

plugins:
  - axolotl.integrations.cut_cross_entropy.CutCrossEntropyPlugin
strict: false

chat_template: qwen3
datasets:
  - path: timarni/MNLP_intstruction_tuning # timarni/MNLP_STEM_IT
    type: alpaca
    split: train

shuffle_merged_datasets: true

val_set_size: 0.1
output_dir: ./outputs/base_full_alpaca_big
dataset_prepared_path: last_run_prepared

sequence_len: 4096 #2048
sample_packing: true # need to check whether the model actually learns on the packed samples (better understand this hyperparameter and eventually install axolotl locally to debug)
eval_sample_packing: true
pad_to_sequence_len: true
# train_on_inputs: true # NEW
# group_by_length: false NEW?

# Ensure no LoRA adapter is used (full fine-tune)
adapter: null
lora: false
merge_lora: false

wandb_project: mnlp_project
wandb_entity: tim-arni
wandb_watch:
wandb_name: base_full_alpaca_big
wandb_log_model:

gradient_accumulation_steps: 16 # 2
micro_batch_size: 2 # 1
num_epochs: 3
optimizer: adamw_torch
lr_scheduler: cosine
learning_rate: 0.00005 # 0.00005
# cosine_min_lr_ratio: 0.1

warmup_ratio: 0.05
weight_decay: 0.01

bf16: auto
tf32: true

gradient_checkpointing: offload
gradient_checkpointing_kwargs:
  use_reentrant: false
resume_from_checkpoint:
logging_steps: 1
gradient_clipping: 1.0 # or max_grad_norm?
flash_attention: true

evals_per_epoch: 4
saves_per_epoch: 2
save_total_limit: 10
special_tokens:
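
For reference, `type: alpaca` tells Axolotl to render each instruction/input/output row into the standard Alpaca prompt before tokenization. The sketch below is illustrative only, assuming the canonical Alpaca template; Axolotl's exact wording and whitespace handling may differ slightly.

```python
# Illustrative sketch of the standard Alpaca prompt format used by `type: alpaca`
# (not taken from the Axolotl source; exact whitespace may differ).
def alpaca_prompt(instruction: str, input_text: str = "", output: str = "") -> str:
    if input_text:
        prompt = (
            "Below is an instruction that describes a task, paired with an input that "
            "provides further context. Write a response that appropriately completes "
            "the request.\n\n"
            f"### Instruction:\n{instruction}\n\n### Input:\n{input_text}\n\n### Response:\n"
        )
    else:
        prompt = (
            "Below is an instruction that describes a task. "
            "Write a response that appropriately completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n### Response:\n"
        )
    return prompt + output

print(alpaca_prompt("State Newton's second law.", output="F = ma"))
```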

outputs/base_full_alpaca_big

This model is a fine-tuned version of Qwen/Qwen3-0.6B-Base on the timarni/MNLP_intstruction_tuning dataset. It achieves the following results on the evaluation set:

  • Loss: 0.1559

Model description

More information needed

Intended uses & limitations

More information needed
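
A minimal inference sketch follows. It assumes the fine-tuned weights are available locally at `./outputs/base_full_alpaca_big` (the `output_dir` above) or on the Hub under an ID such as `timarni/base_full_alpaca_big_376`; adjust the identifier to your checkpoint. Prompting in the same Alpaca format used for training should give the most faithful behaviour.

```python
# Minimal inference sketch (not part of the original card); the model ID is an assumption.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "timarni/base_full_alpaca_big_376"  # or "./outputs/base_full_alpaca_big"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="bfloat16", device_map="auto")

# Use the same Alpaca-style prompt the model was fine-tuned on.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nExplain the difference between speed and velocity.\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```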

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 2
  • eval_batch_size: 2
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 2
  • gradient_accumulation_steps: 16
  • total_train_batch_size: 64
  • total_eval_batch_size: 4
  • optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 28
  • num_epochs: 3.0
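
The total train batch size of 64 follows from micro_batch_size (2) × gradient_accumulation_steps (16) × num_devices (2); likewise, the total eval batch size of 4 is eval_batch_size (2) × num_devices (2).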

Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.5584        | 0.0053 | 1    | 0.8897          |
| 0.1114        | 0.2513 | 47   | 0.1775          |
| 0.0949        | 0.5027 | 94   | 0.1683          |
| 0.0936        | 0.7540 | 141  | 0.1591          |
| 0.0871        | 1.0053 | 188  | 0.1521          |
| 0.0646        | 1.2567 | 235  | 0.1481          |
| 0.0589        | 1.5080 | 282  | 0.1469          |
| 0.0515        | 1.7594 | 329  | 0.1456          |
| 0.0536        | 2.0107 | 376  | 0.1438          |
| 0.0413        | 2.2620 | 423  | 0.1523          |
| 0.0385        | 2.5134 | 470  | 0.1589          |
| 0.0401        | 2.7647 | 517  | 0.1559          |
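
Validation loss reaches its minimum (0.1438) at step 376, around epoch 2.0, and drifts back up over the third epoch while training loss keeps falling, which suggests mild overfitting after epoch 2; the final reported loss of 0.1559 corresponds to the last evaluation at step 517.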

Framework versions

  • Transformers 4.51.3
  • Pytorch 2.5.1+cu121
  • Datasets 3.5.1
  • Tokenizers 0.21.1