---
library_name: transformers
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.1
tags:
- axolotl
- generated_from_trainer
model-index:
- name: Mistral-7B-v0.1-q-sparse-fineweb-edu-table2-re
  results: []
---

[Built with Axolotl](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.5.2`
```yaml
base_model: mistralai/Mistral-7B-v0.1
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
tokenizer_use_fast: false
resize_token_embeddings_to_32x: false

flash_attention: true
xformers_attention:

load_in_8bit: false
load_in_4bit: false
strict: false

datasets:
  - path: skymizer/Mistral-7B-v0.1-base-tokenized-fineweb-edu-45B-4096
    train_on_split: train
    type: completion

test_datasets:
  - path: skymizer/Mistral-7B-v0.1-base-tokenized-fineweb-edu-test-4K
    split: test
    type: completion

is_preprocess: true
skip_prepare_dataset: true
dataset_prepared_path:
hf_use_auth_token: true

output_dir: /mnt/home/model-team/models/Mistral-7B-v0.1-q-sparse-fineweb-edu-table2-re
resume_from_checkpoint:
auto_resume_from_checkpoints: true

sequence_len: 4096
sample_packing: true
sample_packing_group_size: 100000
sample_packing_bin_size: 200
pad_to_sequence_len: true
eval_sample_packing: false
# eval_causal_lm_metrics: ["perplexity"]

wandb_project: "sparse-tuning-cpt"
wandb_entity:
wandb_watch:
wandb_name: "Mistral-7B-v0.1-q-sparse-fineweb-edu-table2-re"
wandb_log_model:

# global batch size = 2 * 8 * 8 GPUs * 8 Nodes * 4096 = 4M
gradient_accumulation_steps: 2
micro_batch_size: 8
eval_batch_size: 1
max_steps: 10000

optimizer: adamw_torch
learning_rate: 0.00005
lr_scheduler: cosine
cosine_min_lr_ratio: 0.2
weight_decay: 0.01
adam_beta1: 0.9
adam_beta2: 0.95
adam_eps: 0.000001
max_grad_norm: 2.0

train_on_inputs: false
group_by_length: false
bf16: true
fp16:
tf32: false

hub_model_id: "skymizer/Mistral-7B-v0.1-q-sparse-fineweb-edu-table2-re"
save_strategy: "steps"
save_steps: 500

gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: false
early_stopping_patience:
local_rank:
logging_steps: 1
warmup_steps: 375
eval_steps: 500
eval_table_size:
debug:
deepspeed: /root/train/axolotl/deepspeed_configs/zero3_bf16.json
fsdp:
fsdp_config:
seed: 42
```

</details><br>
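The comment in the config works out to roughly 4M tokens per optimizer step. A minimal sketch of that arithmetic; note that the 8 GPUs × 8 nodes figure comes from the comment in the config, not from any config key, so it is an assumption here:

```python
# Effective batch size implied by the axolotl config above.
# gpus_per_node and num_nodes are taken from the "# global batch size" comment,
# not from a config key, so they are assumptions.
gradient_accumulation_steps = 2
micro_batch_size = 8
gpus_per_node = 8
num_nodes = 8
sequence_len = 4096

sequences_per_step = (
    gradient_accumulation_steps * micro_batch_size * gpus_per_node * num_nodes
)
tokens_per_step = sequences_per_step * sequence_len

print(sequences_per_step)                        # 1024 sequences per optimizer step
print(f"{tokens_per_step / 2**20:.1f}M tokens")  # 4.0M tokens per optimizer step
```

The 1024 sequences per step matches the `total_train_batch_size` reported in the training hyperparameters below.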

# Mistral-7B-v0.1-q-sparse-fineweb-edu-table2-re

This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the skymizer/Mistral-7B-v0.1-base-tokenized-fineweb-edu-45B-4096 dataset (see the axolotl config above).
It achieves the following results on the evaluation set:
- Loss: 1.9784

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

Per the axolotl config above, training used the skymizer/Mistral-7B-v0.1-base-tokenized-fineweb-edu-45B-4096 dataset (train split, completion format), and evaluation used the skymizer/Mistral-7B-v0.1-base-tokenized-fineweb-edu-test-4K dataset (test split).

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 64
- gradient_accumulation_steps: 2
- total_train_batch_size: 1024
- total_eval_batch_size: 64
- optimizer: AdamW (adamw_torch) with betas=(0.9, 0.95), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 375
- training_steps: 10000

### Training results

| Training Loss | Epoch  | Step  | Validation Loss |
|:-------------:|:------:|:-----:|:---------------:|
| 11.1526       | 0.0001 | 1     | 11.1178         |
| 3.9513        | 0.0408 | 500   | 3.7699          |
| 3.4469        | 0.0817 | 1000  | 3.2772          |
| 3.1993        | 0.1225 | 1500  | 3.0024          |
| 2.8081        | 0.1633 | 2000  | 2.7218          |
| 2.5217        | 0.2042 | 2500  | 2.4860          |
| 2.3993        | 0.2450 | 3000  | 2.3570          |
| 2.2919        | 0.2858 | 3500  | 2.2761          |
| 2.2379        | 0.3267 | 4000  | 2.2180          |
| 2.2047        | 0.3675 | 4500  | 2.1721          |
| 2.1553        | 0.4083 | 5000  | 2.1367          |
| 2.1279        | 0.4491 | 5500  | 2.1066          |
| 2.0689        | 0.4900 | 6000  | 2.0822          |
| 2.0702        | 0.5308 | 6500  | 2.0608          |
| 2.0611        | 0.5716 | 7000  | 2.0425          |
| 2.0242        | 0.6125 | 7500  | 2.0264          |
| 2.0449        | 0.6533 | 8000  | 2.0140          |
| 2.0245        | 0.6941 | 8500  | 2.0025          |
| 2.0107        | 0.7350 | 9000  | 1.9933          |
| 1.9995        | 0.7758 | 9500  | 1.9851          |
| 1.9995        | 0.8166 | 10000 | 1.9784          |

### Framework versions

- Transformers 4.46.3
- PyTorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
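### Usage

A minimal inference sketch with `transformers`, assuming the published checkpoint loads like a standard Mistral causal LM. The repository id comes from `hub_model_id` in the config above; the prompt, dtype, and generation settings are illustrative assumptions, not part of the training recipe.

```python
# Minimal inference sketch for the published checkpoint (hub_model_id from the
# axolotl config). The prompt and generation settings are illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "skymizer/Mistral-7B-v0.1-q-sparse-fineweb-edu-table2-re"

# The config sets tokenizer_use_fast: false, so use the slow tokenizer here too.
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # training ran in bf16
    device_map="auto",           # requires the accelerate package
)

inputs = tokenizer("The FineWeb-Edu dataset is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Note that this is a base (completion-style) continued-pretraining checkpoint, so it is prompted with plain text rather than a chat template.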