
Model Summary

GEMMACITIZEN-12B is a toxicity detection model finetuned from Gemma-3-12B-IT on the in-group annotations of the ModelCitizens dataset. It outperforms GPT-o4-mini, Perspective API, and the OpenAI Moderation API on toxicity detection for context-aware samples.

Usage

PROMPT = """
You will be provided with a statement and some context. Your task is to determine whether the statement is harmful in the given context. If no context is provided, it will be marked as "NA." In such cases, make your decision based solely on the statement. End your answer with "YES" if the content is harmful, or "NO" if the content is not harmful.

CONTEXT: {context}
STATEMENT: {statement}
REPLY: {reply}
"""

Citation

@misc{suvarna2025modelcitizensrepresentingcommunityvoicesonline,
      title={ModelCitizens: Representing Community Voices in Online Safety}, 
      author={Ashima Suvarna and Christina Chance and Karolina Naranjo and Hamid Palangi and Sophie Hao and Thomas Hartvigsen and Saadia Gabriel},
      year={2025},
      eprint={2507.05455},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2507.05455}, 
}