GTAlign: Game-Theoretic Alignment of LLM Assistants for Mutual Welfare
Abstract
Game-Theoretic Alignment (GTAlign) improves Large Language Model (LLM) performance by integrating game-theoretic decision making into reasoning and training, enhancing efficiency, answer quality, and mutual welfare.
Large Language Models (LLMs) have achieved remarkable progress in reasoning, yet sometimes produce responses that are suboptimal for users in tasks such as writing, information seeking, or providing practical guidance. Conventional alignment practices typically assume that maximizing model reward also maximizes user welfare, but this assumption frequently fails in practice: models may over-clarify or generate overly verbose reasoning when users prefer concise answers. Such behaviors resemble the prisoner's dilemma, where individually rational choices lead to socially suboptimal outcomes. The fundamental challenge is the lack of a principled decision making mechanism that mutually benefits both the LLM and the user. We propose Game-Theoretic Alignment (GTAlign), an alignment framework that integrates game-theoretic decision making into both reasoning and training. During reasoning, the model explicitly treats the user-LLM interaction as a strategic game: it constructs payoff matrices within its reasoning chain to estimate welfare for both itself and the user, and then selects actions that are mutually beneficial. During training, we introduce a mutual welfare reward that reinforces cooperative responses, aligning model behavior with socially efficient outcomes. In addition, we present an inference technique that leverages game-theoretic reasoning to dynamically adapt the LLM's response when the pricing policy of the LLM service changes. Extensive experiments demonstrate that GTAlign substantially improves reasoning efficiency, answer quality, and mutual welfare compared to baselines across diverse tasks. The code is available at https://github.com/ulab-uiuc/GTAlign.
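A minimal sketch of the payoff-matrix step described in the abstract, in Python. The strategy names, welfare estimates, and the product-form welfare aggregator are illustrative assumptions for exposition, not values or definitions taken from the paper.

```python
from typing import Dict, Tuple

# Hypothetical payoff matrix the model might construct in its reasoning chain:
# strategy -> (estimated user welfare, estimated LLM welfare).
payoff_matrix: Dict[str, Tuple[float, float]] = {
    "answer_directly":   (0.9, 0.7),  # concise answer, low token cost for the user
    "ask_clarification": (0.4, 0.8),  # safe for the model, slows the user down
    "verbose_reasoning": (0.5, 0.9),  # scores well with a reward model, poor user experience
}

def mutual_welfare(user_w: float, model_w: float) -> float:
    """Illustrative aggregator (product of the two welfares); the paper's
    exact mutual welfare definition may differ."""
    return user_w * model_w

# Select the action that is estimated to be mutually beneficial.
best_action = max(payoff_matrix, key=lambda a: mutual_welfare(*payoff_matrix[a]))
print(best_action)  # -> "answer_directly" under these illustrative payoffs
```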
Community
GTAlign: Game-Theoretic Alignment for Mutual Welfare
GTAlign introduces a game-theoretic view of aligning large language models (LLMs).
Instead of maximizing one-sided rewards, it models the user-LLM interaction as a strategic game where both sides aim for cooperation and shared benefit.
Key Idea
GTAlign defines user utility, model utility, and a mutual welfare function that balances the two.
The model learns through reinforcement learning with mutual welfare rewards, then adapts its strategy at inference time by adjusting payoff weights, with no retraining needed.
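As a rough illustration of how a mutual welfare reward with adjustable payoff weights could be wired up, here is a hedged Python sketch. The linear weighting and the `alpha` parameter are assumptions made for this example, not the paper's exact reward formulation.

```python
def mutual_welfare_reward(user_utility: float,
                          model_utility: float,
                          alpha: float = 0.5) -> float:
    """Weighted combination of user and model utility used as an RL reward.

    alpha acts as a payoff weight: shifting it at inference time (for example,
    when the service's pricing policy changes) tilts behavior toward one side
    without any retraining.
    """
    return alpha * user_utility + (1.0 - alpha) * model_utility

# Training-time reward under balanced weights.
r_train = mutual_welfare_reward(user_utility=0.8, model_utility=0.6)

# Inference-time adaptation: re-weight toward the user when tokens become cheaper.
r_user_tilted = mutual_welfare_reward(0.8, 0.6, alpha=0.7)
print(r_train, r_user_tilted)
```

Because the adaptation only changes the weights fed into the welfare computation, it requires no gradient updates, which is the sense in which the strategy shift is retraining-free.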
Results
Outperforms RLHF and other baselines in reasoning, writing, and safety tasks
Improves human satisfaction by +11%
Maintains robustness across unseen domains and pricing conditions
Why It Matters
GTAlign reframes alignment as cooperative rationality: building LLMs that reason with users, not just for them.
Code is available at https://github.com/ulab-uiuc/GTAlign. Checkpoints are available at https://huggingface.co/GTAlign.