Intermediate Checkpoints Release
For the first time among Korean-targeted LLMs, we are releasing intermediate checkpoints from the Tri family (0.5B, 1.9B, 7B, and 70B) to advance research on LLM training dynamics. Checkpoints are published at regular step intervals: roughly every 20B tokens for the 0.5B model, every 40B tokens for 1.9B, and every 160B tokens for 7B and 70B, enabling consistent analysis of training dynamics across scales. Each checkpoint is stored on its own branch, named after its training step (see the listing snippet below the links). The 0.5B and 1.9B runs were originally produced for system bring-up, but we are sharing them as well because they are valuable artifacts for analyzing training behavior at smaller scales.
You can browse all intermediate checkpoints here:
- Tri-0.5B → https://huggingface.co/trillionlabs/0.5B-Intermediate-Checkpoints
- Tri-1.9B → https://huggingface.co/trillionlabs/1.9B-Intermediate-Checkpoints
- Tri-7B → https://huggingface.co/trillionlabs/Tri-7B-Intermediate-Checkpoints
- Tri-70B (SFT Preview) → https://huggingface.co/trillionlabs/Tri-70B-Intermediate-Checkpoints
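Since every checkpoint lives on its own branch, you can also enumerate the available steps programmatically instead of browsing. Below is a minimal sketch using `huggingface_hub`; the 7B repo is used purely as an example, and the same call works for any of the repos above.

```python
from huggingface_hub import list_repo_refs

# Each branch in the repo corresponds to one intermediate checkpoint
refs = list_repo_refs("trillionlabs/Tri-7B-Intermediate-Checkpoints")
for branch in sorted(refs.branches, key=lambda b: b.name):
    print(branch.name)  # branch names encode the training step
```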
Feel free to check out the full Tri-series collection as well.
Dive into the full details, including training configuration and loss curves, on our blog.
Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Each intermediate checkpoint lives on its own branch, named after the training step
INTERMEDIATE_STEP = "0000020000"

model = AutoModelForCausalLM.from_pretrained("trillionlabs/Tri-70B-Intermediate-Checkpoints", revision=INTERMEDIATE_STEP, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("trillionlabs/Tri-70B-Intermediate-Checkpoints", revision=INTERMEDIATE_STEP, trust_remote_code=True)
...
```
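Beyond loading a single revision, one way to probe training dynamics is to score the same text with successive checkpoints and watch the loss drop. The sketch below uses the 0.5B repo to keep memory requirements small; the entries in `STEPS` are placeholder branch names, so substitute the actual branches from the repo (e.g. via `list_repo_refs` above).

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

REPO = "trillionlabs/0.5B-Intermediate-Checkpoints"
STEPS = ["0000020000", "0000040000"]  # placeholder branch names -- check the repo for the real list
TEXT = "대한민국의 수도는 서울이다."  # any fixed evaluation text works

for step in STEPS:
    tokenizer = AutoTokenizer.from_pretrained(REPO, revision=step, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(REPO, revision=step, trust_remote_code=True)
    inputs = tokenizer(TEXT, return_tensors="pt")
    with torch.no_grad():
        # Causal LM loss on the fixed text; it should generally decrease at later steps
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    print(f"{step}: loss={loss.item():.3f}")
```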