Built with Axolotl

See axolotl config

axolotl version: 0.9.2

######################################
#  CONTINUED PRE-TRAINING EXAMPLE    #
######################################

base_model: Qwen/Qwen3-0.6B-Base
strict: false

# ––– PRE-TRAIN DATA –––
pretraining_dataset:
  - path: timarni/pretrain-textbooks
    type: completion
  - path: timarni/pretrain-wikipedia
    type: completion

shuffle_merged_datasets: true

chat_template: null

# ––– SEQ LEN & PACKING –––
sequence_len: 4096
sample_packing: true
# eval_sample_packing: true # false
pad_to_sequence_len: true
# eval_pad_to_max_length: false

# ––– TRAINING BUDGET –––
micro_batch_size: 4
gradient_accumulation_steps: 4
max_steps: 1500

# ––– OPTIMISER –––
learning_rate: 5e-6
lr_scheduler: cosine
warmup_steps: 400
weight_decay: 0.01
optimizer: adamw_torch

# ––– PRECISION / SPEED –––
bf16: auto
tf32: true
flash_attention: true
gradient_checkpointing: true

# # ––– EVALUATION –––
# do_bench_eval: false          # we handle eval via test_datasets
# test_datasets:                # ← plural!
#   - path: ./datasets/mmlu_val_all.jsonl   # <-- your converted file
#     ds_type: json
#     split: train               # the default split Hugging Face gives local JSONL
#     type: explainchoice # mmlu_mcqa        # explainchoice
#     field_question: question   # these three lines are defaults, but
#     field_choices: choices     # you can leave them out if you matched the keys
#     field_solution: solution

# # eval_batch_size: 1
# eval_steps: 500
# metric_for_best_model: accuracy   # expose "accuracy" coming from explainchoice
# greater_is_better: true
# eval_strategy:


# ––– OUTPUT / LOGGING –––
save_steps: 150
save_total_limit: 15
output_dir: ./outputs/qwen3_pretraining_full_2

wandb_project: mnlp_project
wandb_entity: tim-arni
wandb_name: qwen3-0.6B-pretraining_full_2
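
The sequence_len and sample_packing settings above concatenate multiple tokenized documents into fixed 4096-token training sequences instead of padding each document individually. A simplified sketch of the idea (a conceptual illustration only, not Axolotl's actual packing code):

```python
# Conceptual sketch of sample packing: join tokenized documents with EOS and
# slice the stream into fixed-length blocks of `sequence_len` tokens.
from transformers import AutoTokenizer

SEQUENCE_LEN = 4096  # matches sequence_len in the config above

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-0.6B-Base")

def pack_documents(texts):
    """Tokenize documents, join them with EOS, and cut into full-length blocks."""
    ids = []
    for text in texts:
        ids.extend(tokenizer(text, add_special_tokens=False)["input_ids"])
        ids.append(tokenizer.eos_token_id)
    n_blocks = len(ids) // SEQUENCE_LEN  # the trailing partial block is dropped here
    return [ids[i * SEQUENCE_LEN:(i + 1) * SEQUENCE_LEN] for i in range(n_blocks)]

docs = ["example textbook paragraph " * 1500, "example wikipedia paragraph " * 1500]
print(len(pack_documents(docs)), "packed blocks of", SEQUENCE_LEN, "tokens")
```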

outputs/qwen3_pretraining_full_2

This model is a continued pre-training checkpoint of Qwen/Qwen3-0.6B-Base, trained on the timarni/pretrain-textbooks and timarni/pretrain-wikipedia corpora using the Axolotl configuration shown above.

Model description

More information needed

Intended uses & limitations

More information needed
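
No usage guidance was provided by the author. As a generic starting point, the checkpoint loads like any causal language model with transformers; the sketch below assumes the published repo id timarni/qwen3_pretraining_full_2_300 and plain text completion (this is a base-style, continued pre-training checkpoint, so no chat template is applied):

```python
# Minimal text-completion example with transformers (repo id taken from this model page).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "timarni/qwen3_pretraining_full_2_300"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

inputs = tokenizer("The water cycle begins when", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```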

Training and evaluation data

Training used the timarni/pretrain-textbooks and timarni/pretrain-wikipedia datasets in completion format, merged and shuffled (see the Axolotl config above). No evaluation dataset was used in this run; the evaluation block in the config is commented out.
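
The two corpora can be inspected directly with the datasets library. A quick sketch, assuming both dataset repos expose a train split:

```python
# Peek at the continued pre-training corpora referenced in the config.
from datasets import load_dataset

for repo in ["timarni/pretrain-textbooks", "timarni/pretrain-wikipedia"]:
    ds = load_dataset(repo, split="train")
    print(repo, "-", len(ds), "rows, columns:", ds.column_names)
```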

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-06
  • train_batch_size: 1
  • eval_batch_size: 4
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 4
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 16
  • total_eval_batch_size: 16
  • optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 400
  • training_steps: 1500
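
Combining these settings, the run covers roughly 1,500 steps × 16 sequences/step × 4,096 tokens/sequence ≈ 98M training tokens (assuming fully packed sequences). A quick back-of-the-envelope check:

```python
# Back-of-the-envelope token budget for this run (assumes fully packed 4096-token sequences).
micro_batch_size = 1        # per-device batch size reported above
gradient_accumulation = 4
num_devices = 4
sequence_len = 4096         # from the Axolotl config
training_steps = 1500

total_train_batch_size = micro_batch_size * gradient_accumulation * num_devices  # 16
tokens_per_step = total_train_batch_size * sequence_len                          # 65,536
total_tokens = tokens_per_step * training_steps                                  # 98,304,000

print(f"{total_train_batch_size=}, {tokens_per_step=:,}, {total_tokens=:,}")
```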

Training results

No evaluation metrics were logged for this run; the evaluation block in the Axolotl config is commented out.

Framework versions

  • Transformers 4.51.3
  • Pytorch 2.5.1+cu121
  • Datasets 3.5.1
  • Tokenizers 0.21.1