SmalLM


SmalLM is a series of small transformer models built from scratch for language modeling. The project explores variations on the transformer architecture through modular pipelines for pretraining, fine-tuning, and alignment.

Uses

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Azrail/smallm_70")
# trust_remote_code=True loads the custom SmalLM model code shipped with the repository.
model = AutoModelForCausalLM.from_pretrained("Azrail/smallm_70", trust_remote_code=True)

inputs = tokenizer("How are you?", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.batch_decode(out))
```

Model Details

Key Features:

  1. Grouped Query Attention (GQA); a minimal sketch of the idea follows this list.

  2. Mixture-of-Experts with auxiliary loss-free balancing.

  3. ALiBi (Attention with Linear Biases) or Rotary Position Embedding (RoPE).

  4. NTK-by-parts RoPE interpolation for extending the context length.
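
The sketch below illustrates the grouped-query attention idea from feature 1: several query heads share a smaller set of key/value heads, which shrinks the KV cache. It is a minimal, self-contained PyTorch sketch of the general technique, not the SmalLM implementation; the class name, head counts, and dimensions are illustrative assumptions.

```python
# Minimal GQA sketch (illustrative; not the actual SmalLM module).
import torch
import torch.nn as nn
import torch.nn.functional as F

class GroupedQueryAttention(nn.Module):
    def __init__(self, dim: int = 512, n_heads: int = 8, n_kv_heads: int = 2):
        super().__init__()
        assert n_heads % n_kv_heads == 0
        self.n_heads, self.n_kv_heads = n_heads, n_kv_heads
        self.head_dim = dim // n_heads
        self.q_proj = nn.Linear(dim, n_heads * self.head_dim, bias=False)
        self.k_proj = nn.Linear(dim, n_kv_heads * self.head_dim, bias=False)
        self.v_proj = nn.Linear(dim, n_kv_heads * self.head_dim, bias=False)
        self.o_proj = nn.Linear(n_heads * self.head_dim, dim, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, _ = x.shape
        q = self.q_proj(x).view(b, t, self.n_heads, self.head_dim).transpose(1, 2)
        k = self.k_proj(x).view(b, t, self.n_kv_heads, self.head_dim).transpose(1, 2)
        v = self.v_proj(x).view(b, t, self.n_kv_heads, self.head_dim).transpose(1, 2)
        # Each group of query heads reuses one shared key/value head.
        k = k.repeat_interleave(self.n_heads // self.n_kv_heads, dim=1)
        v = v.repeat_interleave(self.n_heads // self.n_kv_heads, dim=1)
        out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
        return self.o_proj(out.transpose(1, 2).reshape(b, t, -1))
```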

Pre-Training:

| Model | Training Data | Steps | Context Length | Tokens | LR | Batch Size | Precision |
|-------|---------------|-------|----------------|--------|------|------------|-----------|
| SmalLM-70M  | smollm-corpus | 70k | 1024 | 18B | 1e-3 | 0.25M | bfloat16 |
| SmalLM-150M | smollm-corpus | -   | 1024 | -   | -    | -     | bfloat16 |
| SmalLM-350M | smollm-corpus | -   | 1024 | -   | -    | -     | bfloat16 |
| SmalLM-500M | smollm-corpus | -   | 1024 | -   | -    | -     | bfloat16 |
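
The pretraining corpus listed above is smollm-corpus. As a hedged sketch only, assuming it refers to the HuggingFaceTB/smollm-corpus dataset on the Hugging Face Hub and that "cosmopedia-v2" is one of its subsets, it can be streamed with the `datasets` library like this:

```python
# Illustrative sketch: stream one assumed subset of smollm-corpus.
# The repo id and config name below are assumptions; check the dataset card.
from datasets import load_dataset

ds = load_dataset("HuggingFaceTB/smollm-corpus", "cosmopedia-v2",
                  split="train", streaming=True)
for example in ds.take(2):
    print(example["text"][:200])  # assumes a "text" column
```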

Evaluation: Evaluations are run with lm-evaluation-harness.

| Model | MMLU | ARC easy/hard | PIQA | HellaSwag | OBQA | Winogrande |
|-------|------|---------------|------|-----------|------|------------|
| SmalLM-70M  | 25.33 | 51.47/25.68 | 61.75 | 30.31 | 30.8 | 50.83 |
| SmalLM-150M | -     | -           | -     | -     | -    | -     |
| SmalLM-350M | -     | -           | -     | -     | -    | -     |
| SmalLM-500M | -     | -           | -     | -     | -    | -     |
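
The numbers above can in principle be reproduced through the harness's Python entry point. The sketch below is a hedged example: the task names, batch size, and dtype argument are assumptions, not the exact configuration used for the table.

```python
# Sketch of evaluating the checkpoint with lm-evaluation-harness (v0.4+).
from lm_eval import simple_evaluate

results = simple_evaluate(
    model="hf",
    model_args="pretrained=Azrail/smallm_70,trust_remote_code=True,dtype=bfloat16",
    tasks=["mmlu", "arc_easy", "arc_challenge", "piqa",
           "hellaswag", "openbookqa", "winogrande"],
    batch_size=16,  # illustrative value
)
print(results["results"])
```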

Procedure:

Training runs can be visualized in Weights & Biases.

Framework versions

  • Transformers 4.50.3
  • PyTorch 2.6.0+cu126
  • Datasets 3.5.0
  • Tokenizers 0.21.1