
Model Summary

LLAMACITIZEN-8B is a toxicity detection model finetuned from LLaMA-3.1-8B-Instruct on the in-group annotations of the ModelCitizens dataset. It outperforms GPT-o4-mini, Perspective API, and the OpenAI Moderation API on toxicity detection for context-aware samples.

Usage

PROMPT = """
You will be provided with a statement and some context. Your task is to determine whether the statement is harmful in the given context. If no context is provided, it will be marked as "NA." In such cases, make your decision based solely on the statement. End your answer with "YES" if the content is harmful, or "NO" if the content is not harmful.

CONTEXT: {context}
STATEMENT: {statement}
REPLY: {reply}
"""
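A minimal sketch of how this prompt template might be wired into an inference loop (assumed usage; the helper names `build_prompt` and `parse_label` are illustrative, not part of the release):

```python
# The prompt template from the model card above.
PROMPT = """
You will be provided with a statement and some context. Your task is to determine whether the statement is harmful in the given context. If no context is provided, it will be marked as "NA." In such cases, make your decision based solely on the statement. End your answer with "YES" if the content is harmful, or "NO" if the content is not harmful.

CONTEXT: {context}
STATEMENT: {statement}
REPLY: {reply}
"""

def build_prompt(statement: str, context: str = "NA", reply: str = "NA") -> str:
    """Fill the template; fields that are unavailable are marked "NA"."""
    return PROMPT.format(context=context, statement=statement, reply=reply)

def parse_label(generation: str) -> bool:
    """True if the model's answer ends with "YES", i.e. the content was flagged as harmful."""
    return generation.strip().upper().endswith("YES")

# To run inference with Hugging Face transformers (requires downloading the
# 8B weights; parameters here are a sketch, adjust to your hardware):
# from transformers import pipeline
# pipe = pipeline("text-generation", model="modelcitizens/LLAMACITIZEN-8B",
#                 torch_dtype="bfloat16", device_map="auto")
# messages = [{"role": "user", "content": build_prompt("example statement")}]
# out = pipe(messages, max_new_tokens=16)
# harmful = parse_label(out[0]["generated_text"][-1]["content"])
```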

Citation

@misc{suvarna2025modelcitizensrepresentingcommunityvoicesonline,
      title={ModelCitizens: Representing Community Voices in Online Safety},
      author={Ashima Suvarna and Christina Chance and Karolina Naranjo and Hamid Palangi and Sophie Hao and Thomas Hartvigsen and Saadia Gabriel},
      year={2025},
      eprint={2507.05455},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2507.05455}, 
}
Model size: 8.03B params · Tensor type: BF16 (Safetensors)
