PersonaFeedback: A Large-scale Human-annotated Benchmark For Personalization
Abstract
A new benchmark, PersonaFeedback, evaluates Large Language Models' ability to generate personalized responses given explicit user personas, revealing limitations in current systems.
With the rapid improvement in the general capabilities of LLMs, LLM personalization, i.e., how to build LLM systems that can generate personalized responses or services tailored to distinct user personas, has become an increasingly important research and engineering problem. However, while many challenging new benchmarks have been released for evaluating general and reasoning capabilities, the lack of high-quality benchmarks for evaluating LLM personalization greatly hinders progress in this field. To address this, we introduce PersonaFeedback, a new benchmark that directly evaluates LLMs' ability to provide personalized responses given pre-defined user personas and queries. Unlike existing benchmarks that require models to infer implicit user personas from historical interactions, PersonaFeedback decouples persona inference from personalization, focusing on evaluating the model's ability to generate responses tailored to explicit personas. PersonaFeedback consists of 8,298 human-annotated test cases, categorized into easy, medium, and hard tiers based on the contextual complexity of the user personas and the difficulty of distinguishing subtle differences between two personalized responses. We conduct comprehensive evaluations across a wide range of models. The empirical results reveal that even state-of-the-art LLMs that can solve complex real-world reasoning tasks can fall short on the hard tier of PersonaFeedback, where even human evaluators may find the distinctions challenging. Furthermore, we conduct an in-depth analysis of failure modes across various types of systems, demonstrating that the current retrieval-augmented framework should not be seen as a de facto solution for personalization tasks. All benchmark data, annotation protocols, and the evaluation pipeline will be publicly available to facilitate future research on LLM personalization.
Community
Dataset: https://huggingface.co/datasets/PersonalAILab/PersonaFeedback
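Since the benchmark is hosted on the Hugging Face Hub, a pairwise evaluation harness can be built directly on top of the `datasets` library. The sketch below is illustrative only: the split name and the field names (`persona`, `query`, `response_a`, `response_b`, `label`) are assumptions, not the confirmed schema, and should be checked against the dataset card before use.

```python
# Minimal sketch of a pairwise evaluation loop over PersonaFeedback.
# NOTE: field names (persona, query, response_a, response_b, label) and the
# split name are hypothetical; consult the dataset card for the real schema.
from datasets import load_dataset


def build_judge_prompt(persona: str, query: str, resp_a: str, resp_b: str) -> str:
    """Ask the model under test which response is better personalized."""
    return (
        f"User persona:\n{persona}\n\n"
        f"User query:\n{query}\n\n"
        f"Response A:\n{resp_a}\n\n"
        f"Response B:\n{resp_b}\n\n"
        "Which response is better tailored to this persona? Answer 'A' or 'B'."
    )


def evaluate(model_choose, split: str = "test") -> float:
    """model_choose: callable taking a prompt string and returning 'A' or 'B'."""
    ds = load_dataset("PersonalAILab/PersonaFeedback", split=split)
    correct = total = 0
    for ex in ds:
        prompt = build_judge_prompt(
            ex["persona"], ex["query"], ex["response_a"], ex["response_b"]
        )
        pred = model_choose(prompt).strip().upper()[:1]
        correct += int(pred == ex["label"])  # label assumed to be 'A' or 'B'
        total += 1
    return correct / max(total, 1)


if __name__ == "__main__":
    # Trivial baseline that always picks 'A'; replace with a real model call.
    print("Accuracy:", evaluate(lambda prompt: "A"))
```

Under this setup, accuracy of the model's A/B choices against the human-annotated preference, reported per difficulty tier, would serve as the headline metric, matching the paper's framing of distinguishing the better-personalized of two candidate responses.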
The following similar papers were recommended by the Semantic Scholar API (via Librarian Bot):
- PersonaLens: A Benchmark for Personalization Evaluation in Conversational AI Assistants (2025)
- Exploring the Potential of LLMs as Personalized Assistants: Dataset, Evaluation, and Analysis (2025)
- LaMP-QA: A Benchmark for Personalized Long-form Question Answering (2025)
- Teaching Language Models to Evolve with Users: Dynamic Profile Modeling for Personalized Alignment (2025)
- WikiPersonas: What Can We Learn From Personalized Alignment to Famous People? (2025)
- LLMs Think, But Not In Your Flow: Reasoning-Level Personalization for Black-Box Large Language Models (2025)
- SynthesizeMe! Inducing Persona-Guided Prompts for Personalized Reward Models in LLMs (2025)