---
license: apache-2.0
language:
  - en
tags:
  - Research
---

# Model Card for OLMo-2-1B-Decayed-Early

This model is a research variant of OLMo-2-0425-1B. It serves as a baseline for comparisons with OLMo-2-1B-Exp.

This model was obtained by linearly decaying the learning rate of the OLMo-2-0425-1B checkpoint at gradient step 90,000 to zero over 10,000 gradient steps.
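
For illustration, a minimal sketch of such a linear decay schedule (the function name and the `lr_at_90k` parameter are hypothetical; this is not the actual OLMo training code):

```python
def decayed_lr(step: int, lr_at_90k: float) -> float:
    """Illustrative sketch: linearly decay the learning rate from its
    value at gradient step 90,000 to zero at step 100,000."""
    decay_start, decay_steps = 90_000, 10_000
    # Fraction of the decay window completed, clamped to [0, 1]
    progress = min(max(step - decay_start, 0) / decay_steps, 1.0)
    return lr_at_90k * (1.0 - progress)
```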

The model is described in the paper "Train Once, Answer All: Many Pretraining Experiments for the Cost of One".

Note: This is the model referred to as OLMo-2-1B in the paper. To avoid confusion with the fully trained OLMo-2-1B base model, it is named differently on Hugging Face.

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the decayed checkpoint and its tokenizer from the Hugging Face Hub
olmo = AutoModelForCausalLM.from_pretrained("sbordt/OLMo-2-1B-Decayed-Early")
tokenizer = AutoTokenizer.from_pretrained("sbordt/OLMo-2-1B-Decayed-Early")
```
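
A short generation example (the prompt is arbitrary and the generation parameters are illustrative defaults, not recommendations from the paper):

```python
# Tokenize a prompt and generate a short continuation
inputs = tokenizer("Language modeling is ", return_tensors="pt")
outputs = olmo.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```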