# Model Card for OLMo-Flan

## Model Details

### Model Description
This 🤗 Transformers model was finetuned using LoRA adapters for the arXiv paper:
"Planted in Pretraining, Swayed by Finetuning: A Case Study on the Origins of Cognitive Biases in LLMs"
**Abstract:** Large language models (LLMs) exhibit cognitive biases -- systematic tendencies of irrational decision-making, similar to those seen in humans. Prior work has found that these biases vary across models and can be amplified by instruction tuning. However, it remains unclear if these differences in biases stem from pretraining, finetuning, or even random noise due to training stochasticity. We propose a two-step causal experimental approach to disentangle these factors. First, we finetune models multiple times using different random seeds to study how training randomness affects over 30 cognitive biases. Second, we introduce *cross-tuning* -- swapping instruction datasets between models to isolate bias sources. This swap uses datasets that led to different bias patterns, directly testing whether biases are dataset-dependent. Our findings reveal that while training randomness introduces some variability, biases are mainly shaped by pretraining: models with the same pretrained backbone exhibit more similar bias patterns than those sharing only finetuning data. These insights suggest that understanding biases in finetuned models requires considering their pretraining origins beyond finetuning effects. This perspective can guide future efforts to develop principled strategies for evaluating and mitigating bias in LLMs.
We study whether cognitive biases in LLMs emerge from pretraining, instruction tuning, or training randomness. This model is one of three versions trained with identical configurations but different random seeds.
- Model type: Causal decoder-only transformer
- Language(s): English
- License: MIT
- Finetuned from: allenai/OLMo-7B
- Paper: https://arxiv.org/abs/2507.07186
- Project Page: https://itay1itzhak.github.io/planted-in-pretraining
- Repository: https://github.com/itay1itzhak/planted-in-pretraining
## Uses

### Direct Use

Intended for research on cognitive biases in LLMs, in particular to test the causal impact of pretraining versus instruction tuning.
### Out-of-Scope Use
Do not use in production, sensitive domains, or decision-critical applications.
## How to Get Started with the Model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the finetuned model and its tokenizer from the Hugging Face Hub
model = AutoModelForCausalLM.from_pretrained("itay1itzhak/OLMo-Flan-Seed-2")
tokenizer = AutoTokenizer.from_pretrained("itay1itzhak/OLMo-Flan-Seed-2")

# Run a simple generation
inputs = tokenizer("Example input?", return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0]))
```
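The snippet above assumes merged weights are hosted in this repository. If the repository instead stores only the LoRA adapter weights, they could be attached to the pretrained backbone with PEFT; the following is a minimal sketch under that assumption, not a confirmed loading path.

```python
# Minimal sketch, assuming this repo hosts LoRA adapter weights rather than merged weights.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("allenai/OLMo-7B")            # pretrained backbone
model = PeftModel.from_pretrained(base, "itay1itzhak/OLMo-Flan-Seed-2")   # attach LoRA adapters
tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-7B")
```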
## Training Details
- Finetuning method: LoRA (high-rank, rank ∈ [64, 512]); a hedged configuration sketch follows this list
- Instruction data: Flan (350K)
- Seeds: 3 per setting to evaluate randomness effects
- Batch size: 128 (OLMo) / 64 (T5)
- Learning rate: 1e-6 to 1e-3
- Steps: ~5.5k (OLMo) / ~16k (T5)
- Mixed precision: fp16 (OLMo) / bf16 (T5)
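The hyperparameters above can be expressed roughly as a configuration sketch, assuming the Hugging Face `peft` and `transformers` training stack. The target modules, rank/alpha/dropout values, batch-size split, and learning rate below are illustrative placeholders, not the exact settings used in the paper.

```python
# Hedged sketch of the OLMo finetuning setup described above; values marked
# "placeholder" are assumptions, not reported settings.
from peft import LoraConfig
from transformers import TrainingArguments

lora_config = LoraConfig(
    r=256,                                   # placeholder; the card reports rank in [64, 512]
    lora_alpha=512,                          # placeholder scaling factor
    lora_dropout=0.05,                       # placeholder
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed attention projections
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="olmo-flan-lora",
    per_device_train_batch_size=4,           # 4 per device x 4 GPUs x 8 accumulation = 128 (placeholder split)
    gradient_accumulation_steps=8,
    learning_rate=1e-4,                      # placeholder within the reported 1e-6 to 1e-3 range
    max_steps=5500,                          # ~5.5k steps for OLMo
    fp16=True,                               # mixed precision for OLMo
    seed=2,                                  # one of three seeds per setting
)
```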
## Evaluation
- Evaluated on 32 cognitive biases from Itzhak et al. (2024) and Malberg et al. (2024)
- Metrics: mean bias score, PCA clustering of bias patterns, MMLU accuracy (see the sketch after this list)
- Findings: Biases primarily originate in pretraining; randomness introduces moderate variation
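As a minimal illustration of the PCA-clustering metric, the sketch below projects per-model bias-score vectors into two dimensions with scikit-learn. The data here is synthetic and the library choice is an assumption; it is not the paper's evaluation code.

```python
# Hedged sketch: projecting models' bias-score vectors with PCA for clustering/visualization.
import numpy as np
from sklearn.decomposition import PCA

# rows = finetuned models (e.g. 3 seeds x 2 settings), columns = 32 bias scores
bias_scores = np.random.rand(6, 32)  # synthetic placeholder data, not results from the paper

pca = PCA(n_components=2)
projected = pca.fit_transform(bias_scores)  # 2-D coordinates per model
print(projected.shape)  # (6, 2)
```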
## Environmental Impact
- Hardware: 4× NVIDIA A40
- Estimated time: ~120 GPU hours/model
## Technical Specifications
- Architecture: OLMo-7B
- Instruction dataset: Flan (350K)