arxiv:2510.08872

GTAlign: Game-Theoretic Alignment of LLM Assistants for Mutual Welfare

Published on Oct 10
· Submitted by siqi zhu on Oct 13
Abstract

Game-Theoretic Alignment (GTAlign) improves Large Language Model (LLM) performance by integrating game-theoretic decision making into reasoning and training, enhancing efficiency, answer quality, and mutual welfare.

AI-generated summary

Large Language Models (LLMs) have achieved remarkable progress in reasoning, yet sometimes produce responses that are suboptimal for users in tasks such as writing, information seeking, or providing practical guidance. Conventional alignment practices typically assume that maximizing model reward also maximizes user welfare, but this assumption frequently fails in practice: models may over-clarify or generate overly verbose reasoning when users prefer concise answers. Such behaviors resemble the prisoner's dilemma, where individually rational choices lead to socially suboptimal outcomes. The fundamental challenge is the lack of a principled decision making mechanism that mutually benefits both the LLM and the user. We propose Game-Theoretic Alignment (GTAlign), an alignment framework that integrates game-theoretic decision making into both reasoning and training. During reasoning, the model explicitly treats the user-LLM interaction as a strategic game: it constructs payoff matrices within its reasoning chain to estimate welfare for both itself and the user, and then selects actions that are mutually beneficial. During training, we introduce a mutual welfare reward that reinforces cooperative responses, aligning model behavior with socially efficient outcomes. In addition, we introduce an inference technique that leverages game-theoretic reasoning to dynamically adapt the LLM's responses when the pricing policy of the LLM service changes. Extensive experiments demonstrate that GTAlign substantially improves reasoning efficiency, answer quality, and mutual welfare compared to baselines across diverse tasks. The code is available at https://github.com/ulab-uiuc/GTAlign.
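
For concreteness, here is a minimal Python sketch of the payoff-matrix step the abstract describes: the model enumerates its candidate actions, estimates utilities for both itself and the user, and picks the action with the highest expected mutual welfare. The action names, payoff values, and the product-form welfare below are illustrative assumptions, not the paper's exact formulation.

```python
# Illustrative sketch of game-theoretic action selection (not the paper's exact code).
# Actions, payoff values, and the welfare function are hypothetical.

LLM_ACTIONS = ["answer_directly", "ask_clarifying_question"]

# payoffs[(llm_action, user_type)] = (llm_utility, user_utility), assumed values
PAYOFFS = {
    ("answer_directly", "wants_concise"): (0.9, 0.9),
    ("answer_directly", "wants_detailed"): (0.7, 0.4),
    ("ask_clarifying_question", "wants_concise"): (0.5, 0.3),
    ("ask_clarifying_question", "wants_detailed"): (0.8, 0.8),
}

def mutual_welfare(llm_u: float, user_u: float) -> float:
    """Toy welfare measure: product of the two utilities (Nash-product style)."""
    return llm_u * user_u

def best_action(user_belief: dict) -> str:
    """Choose the LLM action maximizing expected mutual welfare
    under a belief distribution over user preferences."""
    def expected_welfare(action: str) -> float:
        return sum(
            prob * mutual_welfare(*PAYOFFS[(action, user_type)])
            for user_type, prob in user_belief.items()
        )
    return max(LLM_ACTIONS, key=expected_welfare)

# Example: if the user most likely wants a concise reply, answer directly.
print(best_action({"wants_concise": 0.7, "wants_detailed": 0.3}))
```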

Community

Paper author Paper submitter

🚀 GTAlign: Game-Theoretic Alignment for Mutual Welfare

GTAlign introduces a game-theoretic view of aligning large language models (LLMs).
Instead of maximizing one-sided rewards, it models the user–LLM interaction as a strategic game where both sides aim for cooperation and shared benefit.

🎯 Key Idea

GTAlign defines user utility, model utility, and a mutual welfare function that balances the two.
The model learns through reinforcement learning with mutual welfare rewards, then adapts its strategy at inference time by adjusting payoff weights, with no retraining needed.
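
As a rough illustration of these two ingredients, a mutual welfare reward for training and an inference-time payoff-weight adjustment, here is a minimal Python sketch. The weighted-sum welfare, the `alpha` weight, and the repricing rule are assumptions for exposition and may differ from the paper's actual definitions.

```python
# Hypothetical sketch of the two knobs described above; the exact functional
# forms in GTAlign may differ.

def mutual_welfare_reward(user_utility: float, model_utility: float,
                          alpha: float = 0.5) -> float:
    """RL training reward: a weighted combination of user and model utilities."""
    return alpha * user_utility + (1.0 - alpha) * model_utility

def reprice_alpha(base_alpha: float, price_per_token: float,
                  reference_price: float = 1.0) -> float:
    """Inference-time adaptation: when the service's per-token price rises,
    shift weight toward the model's cost side so responses become more
    concise, without retraining (purely an assumed reweighting rule)."""
    scale = reference_price / max(price_per_token, 1e-8)
    return min(1.0, base_alpha * scale)

# Example: doubling the token price halves the user-side weight here,
# nudging the policy toward shorter, cheaper answers.
alpha = reprice_alpha(base_alpha=0.5, price_per_token=2.0)
print(mutual_welfare_reward(user_utility=0.8, model_utility=0.6, alpha=alpha))
```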

📈 Results

Outperforms RLHF and other baselines in reasoning, writing, and safety tasks

Improves human satisfaction by 11%

Maintains robustness across unseen domains and pricing conditions

💡 Why It Matters

GTAlign reframes alignment as cooperative rationality: building LLMs that reason with users, not just for them.

Code is available at https://github.com/ulab-uiuc/GTAlign. We uploaded checkpoints at https://huggingface.co/GTAlign.
