Built with Axolotl

See axolotl config

axolotl version: 0.4.1

```yaml
base_model: facebook/opt-125m
batch_size: 32
bf16: true
chat_template: tokenizer_default_fallback_alpaca
datasets:
- format: custom
  path: jamescalam/ai-arxiv-chunked
  type:
    field_instruction: chunk
    field_output: summary
    format: '{instruction}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
eval_steps: 20
flash_attention: true
gpu_memory_limit: 80GiB
gradient_checkpointing: true
group_by_length: true
learning_rate: 0.0002
logging_steps: 10
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 4
model_type: AutoModelForCausalLM
optimizer: adamw_bnb_8bit
output_dir: /workspace/axolotl/configs
pad_to_sequence_len: true
resize_token_embeddings_to_32x: false
sample_packing: false
save_steps: 40
save_total_limit: 1
sequence_len: 2048
tokenizer_type: GPT2TokenizerFast
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.1
wandb_entity: ''
wandb_mode: offline
wandb_name: facebook/opt-125m-jamescalam/ai-arxiv-chunked
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: default
warmup_ratio: 0.05
xformers_attention: true
```
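
For orientation, here is a minimal sketch (not Axolotl's internal code) of the prompt/target mapping this config implies: the `chunk` column fills the bare `'{instruction}'` template (the system prompt is empty) and `summary` is the completion; with `train_on_inputs: false`, loss is computed on the summary tokens only. It assumes the dataset exposes the `chunk` and `summary` columns named in the config.

```python
# Minimal sketch of the prompt/target mapping implied by the config above;
# assumes the dataset exposes the "chunk" and "summary" columns named there.
from datasets import load_dataset

ds = load_dataset("jamescalam/ai-arxiv-chunked", split="train")

def to_example(row):
    prompt = "{instruction}".format(instruction=row["chunk"])  # format: '{instruction}'
    target = row["summary"]                                    # field_output: summary
    # train_on_inputs: false -> loss is computed on the target tokens only
    return {"prompt": prompt, "target": target}

print(to_example(ds[0])["target"][:120])
```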

workspace/axolotl/configs

This model is a fine-tuned version of facebook/opt-125m on the jamescalam/ai-arxiv-chunked dataset. It achieves the following results on the evaluation set:

  • Loss: 3.4806
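
Since the validation loss is mean per-token cross-entropy, it converts directly to perplexity via exp(loss); a quick check in Python:

```python
import math

val_loss = 3.4806          # final validation loss reported above
print(math.exp(val_loss))  # ~32.5 eval perplexity
```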

Model description

More information needed

Intended uses & limitations

More information needed
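
As a hedged illustration of the trained behavior (paper chunk in, summary out), here is a minimal generation sketch with transformers; it assumes the checkpoint is available under the repo id `jwongTensora/test-repo`, and the prompt is just the raw chunk, mirroring the config's `'{instruction}'` template.

```python
# Minimal inference sketch; assumes the checkpoint is available under the
# repo id "jwongTensora/test-repo". Per the config, the prompt is simply
# the raw paper chunk (format '{instruction}' with an empty system prompt).
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "jwongTensora/test-repo"
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

chunk = "We present a method for ..."  # a paper chunk, as in the training data
inputs = tok(chunk, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tok.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```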

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0002
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • optimizer: 8-bit AdamW (adamw_bnb_8bit) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 2
  • training_steps: 50
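
Two of these values can be cross-checked against the config above as a small sanity check: with a total batch_size of 32 and micro_batch_size of 4, Axolotl accumulates gradients over 8 micro-batches per optimizer step, and warmup_ratio 0.05 over 50 max_steps rounds down to the 2 warmup steps listed.

```python
# Sanity-check the derived training hyperparameters (assumes Axolotl's usual
# rule that gradient accumulation = batch_size / micro_batch_size).
batch_size, micro_batch_size = 32, 4
grad_accum = batch_size // micro_batch_size  # -> 8 micro-batches per update
warmup_steps = int(0.05 * 50)                # warmup_ratio * max_steps -> 2
print(grad_accum, warmup_steps)
```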

Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log        | 0.0001 | 1    | 3.6188          |
| 3.3668        | 0.0021 | 20   | 3.5959          |
| 3.3143        | 0.0043 | 40   | 3.4806          |

Framework versions

  • Transformers 4.46.0
  • Pytorch 2.5.0+cu124
  • Datasets 3.0.1
  • Tokenizers 0.20.1