Large Language Model Hacking: Quantifying the Hidden Risks of Using LLMs for Text Annotation
Abstract
LLM hacking introduces substantial variability and error into social science research: plausible annotation configurations can change statistical conclusions, so rigorous verification and human annotation are needed to mitigate the risk.
Large language models (LLMs) are rapidly transforming social science research by enabling the automation of labor-intensive tasks like data annotation and text analysis. However, LLM outputs vary significantly depending on the implementation choices made by researchers (e.g., model selection, prompting strategy, or temperature settings). Such variation can introduce systematic biases and random errors, which propagate to downstream analyses and cause Type I, Type II, Type S, or Type M errors. We call this LLM hacking. We quantify the risk of LLM hacking by replicating 37 data annotation tasks from 21 published social science research studies with 18 different models. Analyzing 13 million LLM labels, we test 2,361 realistic hypotheses to measure how plausible researcher choices affect statistical conclusions. We find incorrect conclusions based on LLM-annotated data in approximately one in three hypotheses for state-of-the-art models, and in half of the hypotheses for small language models. While our findings show that higher task performance and better general model capabilities reduce LLM hacking risk, even highly accurate models do not completely eliminate it. The risk of LLM hacking decreases as effect sizes increase, indicating the need for more rigorous verification of findings near significance thresholds. Our extensive analysis of LLM hacking mitigation techniques emphasizes the importance of human annotations in reducing false positive findings and improving model selection. Surprisingly, common regression estimator correction techniques are largely ineffective in reducing LLM hacking risk, as they heavily trade off Type I vs. Type II errors. Beyond accidental errors, we find that intentional LLM hacking is unacceptably simple. With just a few LLMs and a handful of prompt paraphrases, anything can be presented as statistically significant.
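The core idea can be illustrated with a minimal sketch (not the paper's actual pipeline): run the same downstream hypothesis test on labels produced by several LLM annotation configurations and check whether the statistical conclusion agrees with the one obtained from human labels. Configurations whose significance decision or effect sign disagrees with the human-label reference correspond to Type I/II or Type S errors in the paper's terminology. The configuration names, error rates, sample sizes, and hypothesis below are hypothetical assumptions for illustration only.

```python
# Minimal toy simulation of LLM hacking risk (hypothetical values throughout).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 2000                                  # documents per group (assumed)
true_rate_a, true_rate_b = 0.30, 0.27     # assumed true label prevalence in groups A and B

# Ground-truth (human) binary labels for the two groups.
human_a = rng.binomial(1, true_rate_a, n)
human_b = rng.binomial(1, true_rate_b, n)

def annotate(labels, fpr, fnr):
    """Simulate an LLM annotator that corrupts labels with config-specific error rates."""
    miss = rng.binomial(1, fnr, labels.size)   # misses: 1 -> 0
    false_alarm = rng.binomial(1, fpr, labels.size)  # false alarms: 0 -> 1
    return np.where(labels == 1, 1 - miss, false_alarm)

def two_prop_test(x_a, x_b):
    """Two-sided z-test for a difference in proportions; returns (effect, p-value)."""
    p_a, p_b = x_a.mean(), x_b.mean()
    p_pool = (x_a.sum() + x_b.sum()) / (x_a.size + x_b.size)
    se = np.sqrt(p_pool * (1 - p_pool) * (1 / x_a.size + 1 / x_b.size))
    z = (p_a - p_b) / se
    return p_a - p_b, 2 * norm.sf(abs(z))

# Hypothetical researcher configurations: (model, prompt) pairs with different error profiles.
configs = {
    "model-X / prompt-1": (0.05, 0.10),
    "model-X / prompt-2": (0.12, 0.03),
    "model-Y / prompt-1": (0.02, 0.25),
    "model-Y / prompt-2": (0.20, 0.20),
}

ref_effect, ref_p = two_prop_test(human_a, human_b)
print(f"human labels: effect={ref_effect:+.3f}, p={ref_p:.3f}, significant={ref_p < 0.05}")

for name, (fpr, fnr) in configs.items():
    eff, p = two_prop_test(annotate(human_a, fpr, fnr), annotate(human_b, fpr, fnr))
    flipped = (p < 0.05) != (ref_p < 0.05) or np.sign(eff) != np.sign(ref_effect)
    print(f"{name}: effect={eff:+.3f}, p={p:.3f}, "
          f"significant={p < 0.05}, conclusion flipped={flipped}")
```

Running many such replications and counting how often `conclusion flipped` is true gives a toy analogue of the LLM hacking rates reported in the paper; the study does this at scale with real annotation tasks, real model outputs, and the original studies' regression specifications.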
Community
We show that, by using LLMs as data annotators, you can produce any scientific result you want. We call this LLM hacking.
More teasers & TL;DR on social media:
This is an automated message from the Librarian Bot. The following papers, recommended by the Semantic Scholar API, are similar to this paper:
- Curse of Knowledge: When Complex Evaluation Context Benefits yet Biases LLM Judges (2025)
- Can LLMs Detect Their Confabulations? Estimating Reliability in Uncertainty-Aware Language Models (2025)
- Am I Blue or Is My Hobby Counting Teardrops? Expression Leakage in Large Language Models as a Symptom of Irrelevancy Disruption (2025)
- PRvL: Quantifying the Capabilities and Risks of Large Language Models for PII Redaction (2025)
- Uncovering the Fragility of Trustworthy LLMs through Chinese Textual Ambiguity (2025)
- Beyond Human Judgment: A Bayesian Evaluation of LLMs' Moral Values Understanding (2025)
- Are Humans as Brittle as Large Language Models? (2025)