unitary-toxic-roberta-onnx

unitary-toxic-roberta-onnx is the unitary/unbiased-toxic-roberta toxicity classifier packaged in ONNX format.

The classifier can be used to screen prompts or model outputs for toxic content.
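
As a quick illustration, here is a minimal sketch of running the classifier with Hugging Face optimum and onnxruntime. It assumes the repository follows the standard optimum ONNX layout (a .onnx weights file alongside tokenizer files); the 0.5 threshold and the per-label sigmoid read-out reflect the multi-label training of the parent unitary/unbiased-toxic-roberta model and are illustrative, not part of this package's documentation.

```python
# Minimal sketch: score a string for toxicity with the ONNX export.
# Assumes a standard optimum layout in the repo (model + tokenizer files).
import torch
from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer

repo_id = "llmware/unitary-unbiased-toxic-roberta-onnx"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = ORTModelForSequenceClassification.from_pretrained(repo_id)

text = "You are a wonderful person."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
logits = model(**inputs).logits  # shape: (1, num_labels)

# The parent model is multi-label, so each label gets its own sigmoid score.
scores = torch.sigmoid(logits)[0]
for label_id, score in enumerate(scores.tolist()):
    label = model.config.id2label[label_id]
    if score > 0.5:  # illustrative threshold; tune for your use case
        print(f"{label}: {score:.3f}")
```

If the repository stores the quantized weights under a non-default file name, the export can usually be selected with the `file_name` argument of `from_pretrained`; check the repository's file listing.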

Model Description

  • Developed by: unitary
  • Quantized by: llmware
  • Model type: roberta
  • Parameters: 125 million
  • Model Parent: unitary/unbiased-toxic-roberta
  • Language(s) (NLP): English
  • License: Apache 2.0
  • Uses: Prompt safety
  • RAG Benchmark Accuracy Score: NA
  • Quantization: int4

Model Card Contact

  • llmware on GitHub
  • llmware on Hugging Face
  • llmware website
