arXiv:2407.10853

An Actionable Framework for Assessing Bias and Fairness in Large Language Model Use Cases

Published on Jul 15, 2024

Abstract

AI-generated summary

A decision framework is proposed to assess bias and fairness in large language models by evaluating use cases, incorporating both prompt-specific and model-specific risks, and providing practical evaluation metrics.

Large language models (LLMs) can exhibit bias in a variety of ways. Such biases can create or exacerbate unfair outcomes for certain groups within a protected attribute, including, but not limited to, sex, race, sexual orientation, and age. In this paper, we propose a decision framework that allows practitioners to determine which bias and fairness metrics to use for a specific LLM use case. To establish the framework, we define bias and fairness risks for LLMs, map those risks to a taxonomy of LLM use cases, and then define various metrics to assess each type of risk. Instead of focusing solely on the model itself, we account for both prompt-specific and model-specific risk by defining evaluations at the level of an LLM use case, characterized by a model and a population of prompts. Furthermore, because all of the evaluation metrics are calculated solely from the LLM output, our proposed framework is highly practical and easily actionable for practitioners. For streamlined implementation, all evaluation metrics included in the framework are offered in this paper's companion Python toolkit, LangFair. Finally, our experiments demonstrate substantial variation in bias and fairness across use cases, underscoring the importance of use-case-level assessments.
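To make the use-case-level idea concrete, the minimal sketch below computes an output-only disparity metric over responses a model generated for counterfactual variants of the same prompts. This is an illustration of the general approach, not LangFair's actual API; the lexicon-based scorer, the function names, and the sample responses are all hypothetical stand-ins.

```python
from statistics import mean

def toxicity_score(text: str) -> float:
    # Stand-in scorer: fraction of tokens drawn from a tiny negative lexicon.
    # A real assessment would swap in a trained toxicity classifier; either
    # way, the metric consumes only the LLM's generated text.
    negative = {"lazy", "hostile", "stupid", "useless"}
    tokens = text.lower().replace(".", "").split()
    return sum(t in negative for t in tokens) / max(len(tokens), 1)

def max_group_disparity(responses_by_group: dict[str, list[str]]) -> float:
    # Largest gap in mean score between any two groups, computed over the
    # responses a single model produced for a shared population of prompts.
    group_means = {group: mean(toxicity_score(r) for r in responses)
                   for group, responses in responses_by_group.items()}
    return max(group_means.values()) - min(group_means.values())

# Hypothetical responses to counterfactual prompt variants that differ only
# in the protected group they mention.
responses = {
    "group_a": ["They were diligent and helpful.", "A capable colleague."],
    "group_b": ["They seemed lazy and hostile.", "A capable colleague."],
}
print(f"Max pairwise disparity: {max_group_disparity(responses):.3f}")
```

Because the metric is a function of generated text alone, the same evaluation applies unchanged to any model and prompt population, which is the property the framework relies on.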
