Model Summary
GEMMACITIZEN-12B is a toxicity-detection model finetuned from Gemma-3-12B-IT on in-group annotations from the ModelCitizens dataset. It outperforms GPT-o4-mini, Perspective API, and the OpenAI Moderation API on toxicity detection for context-aware samples.
Repository: asuvarna31/modelcitizens
Usage
PROMPT = """
You will be provided with a statement and some context. Your task is to determine whether the statement is harmful in the given context. If no context is provided, it will be marked as "NA." In such cases, make your decision based solely on the statement. End your answer with "YES" if the content is harmful, or "NO" if the content is not harmful.
CONTEXT: {context}
STATEMENT: {statement}
REPLY: {reply}
"""
Citation
@misc{suvarna2025modelcitizensrepresentingcommunityvoicesonline,
title={ModelCitizens: Representing Community Voices in Online Safety},
author={Ashima Suvarna and Christina Chance and Karolina Naranjo and Hamid Palangi and Sophie Hao and Thomas Hartvigsen and Saadia Gabriel},
year={2025},
eprint={2507.05455},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2507.05455},
}