NOT FOR PUBLIC USE

This repo is public only so it can be used with a merging system that doesn't have access to the org.

Built with Axolotl (axolotl version 0.4.1). The Axolotl config used for training follows:

# Pipeline: log in to Hugging Face and W&B, preprocess the dataset,
# launch training, then merge the LoRA adapter into the base model.
# huggingface-cli login --token $hf_key && wandb login $wandb_key
# python -m axolotl.cli.preprocess ms-adventure.yml
# accelerate launch -m axolotl.cli.train ms-adventure.yml
# python -m axolotl.cli.merge_lora ms-adventure.yml

base_model: unsloth/Mistral-Small-Instruct-2409
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer

load_in_8bit: false
load_in_4bit: true
strict: false
sequence_len: 16384 # 99% vram
min_sample_len: 128
bf16: true
fp16:
tf32: false
flash_attention: true
special_tokens:

# Data
dataset_prepared_path: last_run_prepared
datasets:
  - path: botmall/rosier-inf-split-16k
    type: completion
warmup_steps: 20
shuffle_merged_datasets: true

save_safetensors: true

mlflow_tracking_uri: http://127.0.0.1:7860
mlflow_experiment_name: Default
# WandB
#wandb_project: Mistral-Small-Skein
#wandb_entity:

# Iterations
num_epochs: 1

# Output
output_dir: ./ms-fujin
hub_model_id: BeaverAI/mistral-small-fujin-qlora
hub_strategy: "checkpoint"

# Sampling
sample_packing: true
pad_to_sequence_len: true

# Batching
gradient_accumulation_steps: 1
micro_batch_size: 2
eval_batch_size: 2
gradient_checkpointing: 'unsloth'
gradient_checkpointing_kwargs:
   use_reentrant: true

unsloth_cross_entropy_loss: true
#unsloth_lora_mlp: true
#unsloth_lora_qkv: true
#unsloth_lora_o: true

# Evaluation
val_set_size: 100
evals_per_epoch: 5
eval_table_size:
eval_max_new_tokens: 256
eval_sample_packing: false

# LoRA (an equivalent peft.LoraConfig is sketched after this config)
adapter: qlora
lora_model_dir:
lora_r: 64
lora_alpha: 128
lora_dropout: 0.125
lora_target_linear: 
lora_fan_in_fan_out:
lora_target_modules:
  - gate_proj
  - down_proj
  - up_proj
  - q_proj
  - v_proj
  - k_proj
  - o_proj
lora_modules_to_save:

# Optimizer
optimizer: paged_adamw_8bit # adamw_8bit
lr_scheduler: cosine
learning_rate: 0.0001
cosine_min_lr_ratio: 0.1 # LR floor: decays to 0.1 * 1e-4 = 1e-5
weight_decay: 0.01
max_grad_norm: 1.0

# Misc
train_on_inputs: false
group_by_length: false
early_stopping_patience:
local_rank:
logging_steps: 1
xformers_attention:
debug:
deepspeed: deepspeed_configs/zero3.json # previously blank
fsdp:
fsdp_config:

# Checkpoints
resume_from_checkpoint:
saves_per_epoch: 5

plugins:
  - axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_swiglu: true
liger_fused_linear_cross_entropy: true
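
For reference outside Axolotl, here is a minimal sketch of the adapter geometry above expressed as a peft.LoraConfig. This is one reading of the YAML, not an Axolotl export, and the peft-side field mapping is an assumption based on the standard LoraConfig parameters:

```python
from peft import LoraConfig

# Mirrors the LoRA section of the config above
# (lora_r, lora_alpha, lora_dropout, lora_target_modules).
lora_config = LoraConfig(
    r=64,                # lora_r
    lora_alpha=128,      # lora_alpha; effective scale = alpha / r = 2.0
    lora_dropout=0.125,  # lora_dropout
    target_modules=[
        "gate_proj", "down_proj", "up_proj",
        "q_proj", "v_proj", "k_proj", "o_proj",
    ],
    bias="none",
    task_type="CAUSAL_LM",
)
```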

mistral-small-fujin-qlora

This model is a fine-tuned version of unsloth/Mistral-Small-Instruct-2409 on the botmall/rosier-inf-split-16k dataset. It achieves the following results on the evaluation set:

  • Loss: 2.5938
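
Since this repository ships only the QLoRA adapter, inference needs the base model with the adapter applied on top. A minimal sketch, assuming transformers, peft, and bitsandbytes are installed; the 4-bit load mirrors the training setup, and the prompt and generation settings are purely illustrative:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "unsloth/Mistral-Small-Instruct-2409"
adapter_id = "BeaverAI/mistral-small-fujin-qlora"

# Load the base model in 4-bit, matching load_in_4bit: true / bf16: true above.
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb, device_map="auto"
)

# Attach the trained LoRA adapter on top of the quantized base.
model = PeftModel.from_pretrained(model, adapter_id)

# The adapter was trained on completion-format data, so a plain prompt works.
inputs = tokenizer("Once upon a time", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```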

Model description

A rank-64 QLoRA adapter over unsloth/Mistral-Small-Instruct-2409, trained on completion-format data at a 16,384-token sequence length (see the config above).

Intended uses & limitations

Not intended for direct public use; the repo is public only so a merging system without org access can consume it (see the note at the top of this card).

Training and evaluation data

Per the config, training used botmall/rosier-inf-split-16k as a completion-type dataset, with 100 examples held out for validation (val_set_size: 100).
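
A quick way to eyeball that data is sketched below; this is hedged, and the "text" column name is an assumption (Axolotl's completion type reads from a text field by default):

```python
from datasets import load_dataset

# Dataset path taken from the config above; the "text" column is an assumption
# (Axolotl's completion format defaults to a "text" field).
ds = load_dataset("botmall/rosier-inf-split-16k", split="train")
print(ds)                   # splits, columns, row counts
print(ds[0]["text"][:500])  # first 500 characters of the first sample
```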

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0001
  • train_batch_size: 2
  • eval_batch_size: 2
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 8
  • total_train_batch_size: 16 (derived in the sketch after this list)
  • total_eval_batch_size: 16
  • optimizer: paged 8-bit AdamW (paged_adamw_8bit) with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 20
  • num_epochs: 1
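
The effective batch size in the list above follows directly from the per-device settings; a small sketch of the arithmetic, including the resulting token budget per optimizer step at the packed 16,384-token sequence length:

```python
micro_batch_size = 2
gradient_accumulation_steps = 1
num_devices = 8
sequence_len = 16_384

total_train_batch_size = micro_batch_size * gradient_accumulation_steps * num_devices
assert total_train_batch_size == 16  # matches the value reported above

# With sample packing, each micro-batch row is filled close to sequence_len,
# so this is an upper bound on tokens seen per optimizer step.
tokens_per_step = total_train_batch_size * sequence_len
print(tokens_per_step)  # 262144
```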

Training results

Training Loss | Epoch  | Step | Validation Loss
------------- | ------ | ---- | ---------------
1.9557        | 0.0031 | 1    | 2.6437
1.8648        | 0.2025 | 66   | 2.6013
1.9514        | 0.4049 | 132  | 2.5771
1.9213        | 0.6074 | 198  | 2.5940
1.9094        | 0.8098 | 264  | 2.5938

Framework versions

  • PEFT 0.13.0
  • Transformers 4.45.1
  • PyTorch 2.3.1
  • Datasets 2.21.0
  • Tokenizers 0.20.0