Isabelle Augenstein (IAugenstein)
http://isabelleaugenstein.github.io/
AI & ML interests
Low-resource learning, natural language understanding, fact checking, explainable AI
Recent Activity

Authored a paper about 1 hour ago: "Unstructured Evidence Attribution for Long Context Query Focused Summarization"
Reacted to frimelle's post with ❤️ 1 day ago:
What’s in a name? More than you might think, especially for AI. Whenever I introduce myself, people often start speaking French to me, even though my French is très basic. It turns out that AI systems do something similar: large language models infer cultural identity from names, shaping their responses based on presumed backgrounds. But is this helpful personalization or a reinforcement of stereotypes?

In our latest paper, we explored this question by testing DeepSeek, Llama, Aya, Mistral-Nemo, and GPT-4o-mini on how they associate names with cultural identities. We analysed 900 names from 30 cultures and found strong assumptions baked into AI responses: some cultures were overrepresented, while others barely registered. For example, a name like "Jun" often triggered Japan-related responses, while "Carlos" was linked primarily to Mexico, even though these names exist in multiple countries. Meanwhile, names from places like Ireland led to more generic answers, suggesting weaker associations in the training data.

This has real implications for AI fairness: How should AI systems personalize without stereotyping? Should they adapt at all based on a name?

Work with some of my favourite researchers: @sidicity, Arnav Arora, and @IAugenstein. Read the full paper here: https://huggingface.co/papers/2502.11995
Reacted to frimelle's post with 🔥 1 day ago:
Papers (26)

arxiv:2502.14409
arxiv:2502.11995
arxiv:2502.09083
arxiv:2412.17031
Models: none public yet
Datasets: none public yet