arxiv:2509.08358

<think> So let's replace this phrase with insult... </think> Lessons learned from generation of toxic texts with LLMs

Published on Sep 10 · Submitted by Sergey Pletenev on Sep 11

Abstract

Models fine-tuned on synthetic toxic data generated by LLMs perform worse than those trained on human data due to a lexical diversity gap in the synthetic content.

AI-generated summary

Modern Large Language Models (LLMs) are excellent at generating synthetic data. However, their performance in sensitive domains such as text detoxification has not received proper attention from the scientific community. This paper explores the possibility of using LLM-generated synthetic toxic data as an alternative to human-generated data for training detoxification models. Using activation-patched Llama 3 and Qwen models, we generated synthetic toxic counterparts for neutral texts from the ParaDetox and SST-2 datasets. Our experiments show that models fine-tuned on synthetic data consistently perform worse than those trained on human data, with a drop in performance of up to 30% in joint metrics. The root cause is identified as a critical lexical diversity gap: LLMs generate toxic content using a small, repetitive vocabulary of insults that fails to capture the nuances and variety of human toxicity. These findings highlight the limitations of current LLMs in this domain and emphasize the continued importance of diverse, human-annotated data for building robust detoxification systems.
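
The summary traces the performance drop to a lexical diversity gap between synthetic and human toxic texts. The exact metric is not stated here, so the snippet below is only an illustrative sketch: it compares distinct-n ratios (unique n-grams over total n-grams) for two corpora. The corpus contents and the choice of distinct-n are assumptions for illustration, not the paper's measurement code.

```python
# Illustrative sketch (not from the paper): quantify a "lexical diversity gap"
# between human-written and LLM-generated toxic corpora via distinct-n ratios.
from collections import Counter


def distinct_n(texts, n=1):
    """Fraction of n-grams in a corpus that are unique (distinct-n)."""
    ngrams = Counter()
    for text in texts:
        tokens = text.lower().split()
        ngrams.update(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    total = sum(ngrams.values())
    return len(ngrams) / total if total else 0.0


# Hypothetical stand-ins for the toxic halves of a parallel detox corpus.
human_toxic = ["human-written toxic sentence one", "another human-written toxic line"]
synthetic_toxic = ["llm generated toxic sentence one", "llm generated toxic sentence two"]

for n in (1, 2):
    print(f"distinct-{n}: human={distinct_n(human_toxic, n):.3f}  "
          f"synthetic={distinct_n(synthetic_toxic, n):.3f}")
```

On a measure like this, a synthetic corpus that keeps reusing the same small set of insults would show markedly lower distinct-1/distinct-2 scores than the human-written side, which is the kind of gap the findings describe.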

Community

Paper author · Paper submitter

Main findings:

  • Human annotation remains crucial for sensitive domains
  • Synthetic data alone creates ineffective detoxification systems
  • Risk of deploying models that fail on real-world toxicity
  • Critical lexical diversity gap - repetitive, limited vocabulary in synthetic toxic texts

