Model Card for s-nlp/bart-large-pseudoparadetox-llama3-70b-10shot

Model Details

This model is a BART-large sequence-to-sequence model fine-tuned for English Text Detoxification (style transfer from toxic to neutral). It was trained on the PseudoParaDetox dataset, which was synthetically generated using the Llama 3 70B Instruct LLM in a 10-shot setting, leveraging Activation Patching to bypass safety alignment during the data generation phase.

The resulting detoxification model demonstrates high fluency and content preservation scores, outperforming models trained on the original human-annotated ParaDetox dataset in manual human evaluation.

  • Developed by: Daniil Moskovskiy, Sergey Pletenev, and Alexander Panchenko
  • Model type: Encoder-Decoder (BART-large)
  • Language(s) (NLP): English (en)
  • License: OpenRAIL++
  • Finetuned from model: facebook/bart-large

Model Sources

  • Repository (Code & Data): https://github.com/s-nlp/pseudoparadetox
  • Paper: "LLMs to Replace Crowdsourcing For Parallel Data Creation? The Case of Text Detoxification" (Moskovskiy, Pletenev, & Panchenko, EMNLP 2024)

Uses

Direct Use

The model is intended for the automatic rewriting of toxic, offensive, or rude English input text into a polite or neutral tone, while preserving the original semantic meaning and fluency. Example applications include:

  • Filtering text for online forums or social media.
  • Enabling polite response generation in conversational agents.
  • Assisting users in editing drafted messages to be more respectful.

Downstream Use

This model can be integrated into larger content moderation pipelines, specifically handling the mitigation step after toxicity detection.
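
The mitigation step can be sketched as a small gating function: only texts flagged by a detector are rewritten. The classifier and detoxifier below are stub callables standing in for real models (e.g. a toxicity classifier and this BART model); they are illustrative, not the authors' pipeline:

```python
from typing import Callable

def moderate(text: str,
             is_toxic: Callable[[str], bool],
             detoxify: Callable[[str], str]) -> str:
    """Mitigation step of a moderation pipeline: rewrite only flagged texts."""
    return detoxify(text) if is_toxic(text) else text

# Stubs standing in for a real toxicity detector and detoxification model:
is_toxic = lambda t: "idiot" in t.lower()
detoxify = lambda t: "You are mistaken."

print(moderate("You are an idiot.", is_toxic, detoxify))   # -> You are mistaken.
print(moderate("Nice weather today.", is_toxic, detoxify))  # passed through unchanged
```

In production, the two stubs would be replaced by model inference calls; the gating logic itself stays the same.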

Out-of-Scope Use

The model is trained strictly for English detoxification. It should not be used for:

  • Generating original, creative content (it is a rewriting model).
  • Detoxification in languages other than English.
  • Censoring legitimate political or critical commentary that does not violate toxicity guidelines (though this is a risk inherent to all detoxification systems).

Bias, Risks, and Limitations

This model inherits typical limitations of text style transfer models:

  • Semantic Loss (Over-Sanitization): In attempts to fully remove toxicity, the model may sometimes alter or dilute the core meaning of the original statement, especially when dealing with complex or subtle insults.
  • Inherent Bias: The base BART model and the LLM used for data generation (Llama 3) carry pre-existing biases, which may manifest as inconsistent detoxification quality across different demographics or topics.
  • Data Generation Risk: The training data was created using an Activation Patching technique to bypass Llama 3's safety alignment. While this was necessary for generating high-quality parallel data, users should be aware that the training data distribution may reflect content that human annotators typically refuse to handle.

Recommendations

We recommend systematic testing on new domain data before deployment. Users should implement a post-processing toxicity classifier to confirm that the detoxified output is truly non-toxic, as detoxification is not guaranteed in 100% of cases.
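
One way to implement that recommendation is a wrapper that re-checks each output with a toxicity classifier and withholds the text if detoxification keeps failing. The classifier and detoxifier below are hypothetical stubs, not the authors' code:

```python
def detoxify_with_check(text, detoxify, is_toxic, max_attempts=2):
    """Run the detoxifier, then verify the output with a toxicity classifier.

    Returns the first candidate the classifier accepts, or None so the
    caller can block the text or route it to human review instead.
    """
    candidate = text
    for _ in range(max_attempts):
        candidate = detoxify(candidate)
        if not is_toxic(candidate):
            return candidate
    return None

# Stub components for illustration:
is_toxic_stub = lambda t: "stupid" in t
detox_stub = lambda t: t.replace("stupid ", "")
print(detoxify_with_check("that is a stupid idea", detox_stub, is_toxic_stub))
```

Returning None rather than the last candidate makes the failure case explicit, which matters when the downstream consumer assumes all emitted text is non-toxic.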

How to Get Started with the Model

Use the code below to get started with the model:

from transformers import pipeline

detoxifier = pipeline("text2text-generation", model="s-nlp/bart-large-pseudoparadetox-llama3-70b-10shot")

toxic_text = "You are dumb idiot!"

result = detoxifier(toxic_text, max_length=128)
print(result[0]['generated_text'])
# Example output (exact wording may vary): "You are wrong!"

Training Details

Training Data

The model was fine-tuned on PseudoParaDetox, a synthetic dataset generated by the Llama 3 70B Instruct model. The source texts were derived from the toxic side of the ParaDetox corpus. The generation utilized a 10-shot prompt setup to guide the LLM's rewriting process. This generation was facilitated by an Activation Patching technique to prevent the LLM from refusing to generate detoxified output for highly toxic inputs.
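
The 10-shot setup can be pictured as a prompt that stacks example toxic/neutral pairs before the new toxic input. The instruction wording and pair formatting below are illustrative assumptions, not the paper's actual prompt:

```python
def build_few_shot_prompt(example_pairs, source_text):
    """Assemble an n-shot detoxification prompt (the paper's setup uses 10 pairs)."""
    parts = ["Rewrite each toxic sentence as a polite sentence with the same meaning.", ""]
    for toxic, neutral in example_pairs:
        parts += [f"Toxic: {toxic}", f"Neutral: {neutral}", ""]
    # The new input ends the prompt, leaving the LLM to complete the Neutral line.
    parts += [f"Toxic: {source_text}", "Neutral:"]
    return "\n".join(parts)

shots = [("this is damn nonsense", "this does not make sense")]
prompt = build_few_shot_prompt(shots, "what a dumb idea")
print(prompt)
```

With 10 such pairs in the prompt, the LLM's completion of the final `Neutral:` line becomes the pseudo-parallel target for the toxic source.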

Training Procedure

The BART-large model was fine-tuned using the following key parameters:

Training Hyperparameters

  • Optimizer: AdamW
  • Learning Rate: 0.00005
  • Batch Size: 32 (with 1 gradient accumulation step)
  • Epochs: 5
  • Precision: bfloat16
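
The hyperparameters above map onto a standard Hugging Face fine-tuning configuration. The sketch below assumes the usual Seq2SeqTrainer workflow; the output directory name is a placeholder, not taken from the paper:

```python
from transformers import Seq2SeqTrainingArguments

# Hyperparameters from the table above; output_dir is a placeholder.
training_args = Seq2SeqTrainingArguments(
    output_dir="bart-large-pseudoparadetox",
    learning_rate=5e-5,              # 0.00005
    per_device_train_batch_size=32,
    gradient_accumulation_steps=1,
    num_train_epochs=5,
    bf16=True,                       # bfloat16 mixed precision
)
```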

Evaluation

The final model performance was assessed using a combination of automatic metrics (on the ParaDetox test set) and side-by-side comparisons using GPT-4o, followed by manual human evaluation.

Testing Data, Factors & Metrics

Testing Data

ParaDetox private test split (671 texts).

Metrics

  • Style Transfer Accuracy (STA): Measured by a RoBERTa-based toxicity classifier. (Higher is better, reflecting successful toxicity removal.)
  • Semantic Similarity (SIM): Measured by BLEURT score between the original and detoxified text.
  • Fluency (FL): Measured by a RoBERTa-based linguistic acceptability classifier (CoLA-trained).
  • Joint Score (J): The product of STA, SIM, and FL, computed per sample and averaged over the test set.
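
As a sanity check, the J values reported in the Results section are closely approximated by the plain product of the three corpus-level automatic metrics (the small residual comes from J being averaged per sample rather than computed from corpus averages):

```python
def joint_score(sta, sim, fl):
    """Joint metric J = STA * SIM * FL (per sample; here applied to corpus averages)."""
    return sta * sim * fl

# Corpus-level approximation against the reported automatic scores:
print(round(joint_score(0.876, 0.616, 0.824), 3))  # -> 0.445 (reported J: 0.444)
print(round(joint_score(0.842, 0.594, 0.866), 3))  # -> 0.433 (reported J: 0.434)
```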

Results

The results below reflect the performance of BART fine-tuned on the Llama 3 70B A.P. 10-shot data, compared against the baseline BART trained on the original human-annotated ParaDetox data.

| Metric | BART (Original ParaDetox) | BART (PseudoParaDetox 70B AP 10-shot) |
| --- | --- | --- |
| STA (Auto) | 0.876 | 0.842 |
| SIM (Auto) | 0.616 | 0.594 |
| FL (Auto) | 0.824 | 0.866 |
| Joint Score (J) | 0.444 | 0.434 |
| Manual J Score | 0.661 | 0.762 |
| GPT-4o Win Rate vs. Baseline | – | 65% |

Summary

While automatic metrics show comparable performance, the superior Manual Joint Score (J=0.762) and high GPT-4o side-by-side win rate (65%) indicate that the data generated using the patched LLM results in subjectively higher-quality detoxification compared to the original crowdsourced data.

Citation

BibTeX:

@inproceedings{moskovskiy-etal-2024-llms,
    title = "{LLM}s to Replace Crowdsourcing For Parallel Data Creation? The Case of Text Detoxification",
    author = "Moskovskiy, Daniil  and
      Pletenev, Sergey  and
      Panchenko, Alexander",
    editor = "Al-Onaizan, Yaser  and
      Bansal, Mohit  and
      Chen, Yun-Nung",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2024",
    month = nov,
    year = "2024",
    address = "Miami, Florida, USA",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.findings-emnlp.839/",
    doi = "10.18653/v1/2024.findings-emnlp.839",
    pages = "14361--14373",
    abstract = "The lack of high-quality training data remains a significant challenge in NLP. Manual annotation methods, such as crowdsourcing, are costly, require intricate task design skills, and, if used incorrectly, may result in poor data quality. From the other hand, LLMs have demonstrated proficiency in many NLP tasks, including zero-shot and few-shot data annotation. However, they often struggle with text detoxification due to alignment constraints and fail to generate the required detoxified text. This work explores the potential of modern open source LLMs to annotate parallel data for text detoxification. Using the recent technique of activation patching, we generate a pseudo-parallel detoxification dataset based on ParaDetox. The detoxification model trained on our generated data shows comparable performance to the original dataset in automatic detoxification evaluation metrics and superior quality in manual evaluation and side-by-side comparisons."
}

APA: Moskovskiy, D., Pletenev, S., & Panchenko, A. (2024, November). LLMs to replace crowdsourcing for parallel data creation? The case of text detoxification. In Findings of the Association for Computational Linguistics: EMNLP 2024 (pp. 14361–14373). Association for Computational Linguistics.
