This model is the toxicity classifier used in the paper *Self-Detoxifying Language Models via Toxification Reversal*.

We did not use the Perspective API to assess the toxicity of newly generated text because of its request-throughput limits. Instead, for efficiency, we trained an offline toxicity scorer on 90K RealToxicityPrompts (RTP) samples that were not used for evaluation. Specifically, we fine-tuned a DeBERTa-v3-large model (He et al., 2023) to fit the original API's toxicity probabilities by minimizing the KL divergence between the two. On the held-out 10K subset, the fine-tuned model achieved 94.87% accuracy and a 98.54% AUROC, indicating that it can effectively estimate text toxicity as a substitute for the API. With comparable estimation quality, the model offers far higher throughput: roughly 27,000 samples per second versus the API's typical 25 queries per second.
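The distillation objective above can be sketched as follows. This is a minimal illustration of fitting a two-class classifier's output distribution to the API's scalar toxicity probability via KL divergence; the function name and the `[non-toxic, toxic]` label ordering are assumptions for illustration, not the exact training code from the paper.

```python
import torch
import torch.nn.functional as F

def soft_label_kl_loss(logits: torch.Tensor, target_probs: torch.Tensor) -> torch.Tensor:
    """KL divergence between soft API labels and the classifier's predictions.

    logits:       (batch, 2) raw classifier outputs, assumed order [non-toxic, toxic]
    target_probs: (batch,)   toxicity probabilities from the Perspective API
    """
    log_probs = F.log_softmax(logits, dim=-1)
    # Expand the scalar toxicity probability into a 2-class target distribution.
    targets = torch.stack([1.0 - target_probs, target_probs], dim=-1)
    # KL(targets || predictions), averaged over the batch.
    return F.kl_div(log_probs, targets, reduction="batchmean")

# Toy usage: uniform logits perfectly match a 0.5 target, giving (near-)zero loss.
loss = soft_label_kl_loss(torch.zeros(1, 2), torch.tensor([0.5]))
```

In practice the logits would come from the DeBERTa-v3-large sequence-classification head, and this loss would replace the standard cross-entropy on hard labels.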
