arxiv:2510.26707

Value Drifts: Tracing Value Alignment During LLM Post-Training

Published on Oct 30 · Submitted by Mehar Bhatia on Nov 3

AI-generated summary

This paper investigates how and when value alignment arises during LLM post-training, finding that supervised fine-tuning largely establishes a model's values, while subsequent preference optimization rarely re-aligns them.

Abstract

As LLMs occupy an increasingly important role in society, they are more and more confronted with questions that require them not only to draw on their general knowledge but also to align with certain human value systems. Therefore, studying the alignment of LLMs with human values has become a crucial field of inquiry. Prior work, however, mostly focuses on evaluating the alignment of fully trained models, overlooking the training dynamics by which models learn to express human values. In this work, we investigate how and at which stage value alignment arises during the course of a model's post-training. Our analysis disentangles the effects of post-training algorithms and datasets, measuring both the magnitude and time of value drifts during training. Experimenting with Llama-3 and Qwen-3 models of different sizes and popular supervised fine-tuning (SFT) and preference optimization datasets and algorithms, we find that the SFT phase generally establishes a model's values, and subsequent preference optimization rarely re-aligns these values. Furthermore, using a synthetic preference dataset that enables controlled manipulation of values, we find that different preference optimization algorithms lead to different value alignment outcomes, even when preference data is held constant. Our findings provide actionable insights into how values are learned during post-training and help to inform data curation, as well as the selection of models and algorithms for preference optimization to improve model alignment to human values.
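
As a rough illustration (not the authors' code), the sketch below shows one way to quantify the magnitude and timing of value drift from a series of post-training checkpoints. It assumes each checkpoint has already been scored on a handful of value dimensions, for instance by prompting the model with value-eliciting questions and mapping its responses to scores in [0, 1]; the score matrix here is toy data and all names are hypothetical.

import numpy as np

def trace_value_drift(scores: np.ndarray):
    # scores: (num_checkpoints, num_value_dimensions) alignment scores in [0, 1],
    # one row per saved checkpoint, in training order.
    step_drift = np.linalg.norm(np.diff(scores, axis=0), axis=1)  # shift between consecutive checkpoints
    total_drift = float(np.linalg.norm(scores[-1] - scores[0]))   # net shift from start to end of training
    largest_step = int(np.argmax(step_drift)) + 1                 # checkpoint at which the largest shift occurs
    return step_drift, total_drift, largest_step

# Toy trajectory: most of the movement happens during early SFT and almost none
# during preference optimization, mirroring the paper's headline finding.
scores = np.array([
    [0.50, 0.50, 0.50],  # base model
    [0.80, 0.30, 0.60],  # early SFT checkpoint: large drift
    [0.82, 0.28, 0.62],  # late SFT checkpoint: values have settled
    [0.83, 0.29, 0.61],  # after preference optimization: barely moves
])
per_step, total, when = trace_value_drift(scores)
print("per-step drift:", per_step, "total drift:", total, "largest shift at checkpoint:", when)

The paper's actual measurements are richer (real value probes, multiple model families and sizes, controlled preference data); this only illustrates what the magnitude and timing of a drift mean operationally.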

Community

Paper submitter

How do LLMs acquire human values?

We often point to preference optimization. In our new work, we trace how and when model values shift during post-training and find surprising dynamics.

We ask: How do data, algorithms, and their interaction shape model values?

[Animated figure: tracing value drifts during post-training]
