---
license: apache-2.0
language:
- en
base_model:
- google/siglip2-base-patch16-224
pipeline_tag: image-classification
library_name: transformers
tags:
- Content Safety
- Mature Content Detection
- Enticing
- Sensual
- Neutral
- Anime
- ViT
- digital spaces
- SigLIP2
- vision-language encoder
- single-label
---

# **Mature-Content-Detection**
> **Mature-Content-Detection** is an image-classification vision-language encoder model fine-tuned from **google/siglip2-base-patch16-224** for a single-label classification task. It is designed to classify images into mature or neutral content categories using the **SiglipForImageClassification** architecture.

> [!NOTE]
> Use this model to support positive, safe, and respectful digital spaces. Misuse is strongly discouraged and may violate platform or regional policies. The model itself does not generate any unsafe content: it is a classification model, and so does not fall under the category of models unsuitable for all audiences.

> [!IMPORTANT]
> Neutral = Safe / Normal
```text
Classification Report:
                      precision    recall  f1-score   support

       Anime Picture     0.8130    0.8066    0.8098      5600
              Hentai     0.8317    0.8134    0.8224      4180
             Neutral     0.8344    0.7785    0.8055      5503
         Pornography     0.9161    0.8464    0.8799      5600
 Enticing or Sensual     0.7699    0.8979    0.8290      5600

            accuracy                         0.8296     26483
           macro avg     0.8330    0.8286    0.8293     26483
        weighted avg     0.8331    0.8296    0.8298     26483
```
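For reference, a report in this format can be generated with scikit-learn's `classification_report`. The sketch below is a minimal illustration; `y_true` and `y_pred` are placeholder arrays standing in for labels and predictions collected on an evaluation split, not the actual evaluation data:

```python
# Minimal sketch: producing a report like the one above with scikit-learn.
# y_true / y_pred here are placeholders, not the real evaluation data.
from sklearn.metrics import classification_report

label_names = ["Anime Picture", "Hentai", "Neutral", "Pornography", "Enticing or Sensual"]

y_true = [0, 2, 3, 4, 1]  # ground-truth class ids (placeholder)
y_pred = [0, 2, 3, 3, 1]  # model predictions (placeholder)

print(classification_report(y_true, y_pred, target_names=label_names, digits=4))
```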
---
The model categorizes images into five classes:
- **Class 0:** Anime Picture
- **Class 1:** Hentai
- **Class 2:** Neutral
- **Class 3:** Pornography
- **Class 4:** Enticing or Sensual
# **Run with Transformers 🤗**
```python
!pip install -q transformers torch pillow gradio
```
```python
import gradio as gr
import torch
from PIL import Image
from transformers import AutoImageProcessor, SiglipForImageClassification

# Load model and processor
model_name = "prithivMLmods/Mature-Content-Detection"
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)

# Class-id-to-label mapping
labels = {
    "0": "Anime Picture",
    "1": "Hentai",
    "2": "Neutral",
    "3": "Pornography",
    "4": "Enticing or Sensual"
}

def mature_content_detection(image):
    """Predicts the type of content in the image."""
    image = Image.fromarray(image).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")

    with torch.no_grad():
        outputs = model(**inputs)
        logits = outputs.logits
        probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist()

    predictions = {labels[str(i)]: round(probs[i], 3) for i in range(len(probs))}
    return predictions

# Create Gradio interface
iface = gr.Interface(
    fn=mature_content_detection,
    inputs=gr.Image(type="numpy"),
    outputs=gr.Label(label="Prediction Scores"),
    title="Mature Content Detection",
    description="Upload an image to classify whether it contains anime, hentai, neutral, pornographic, or enticing/sensual content."
)

# Launch the app
if __name__ == "__main__":
    iface.launch()
```
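If you don't need the Gradio UI, the model can also be run directly in a short script. Below is a minimal sketch of single-image inference, where `example.jpg` is a placeholder path:

```python
# Minimal scripted inference sketch (no Gradio); the image path is a placeholder.
import torch
from PIL import Image
from transformers import AutoImageProcessor, SiglipForImageClassification

model_name = "prithivMLmods/Mature-Content-Detection"
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)

image = Image.open("example.jpg").convert("RGB")  # placeholder path
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

probs = logits.softmax(dim=-1).squeeze()
pred_id = int(probs.argmax())
print(f"{model.config.id2label[pred_id]}: {float(probs[pred_id]):.3f}")
```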
---
# **Guidelines for Using Mature-Content-Detection**
The **Mature-Content-Detection** model is a computer vision classifier designed to detect and categorize adult-themed and anime-based content. It supports responsible content moderation and filtering across digital platforms. To ensure the ethical and intended use of the model, please follow the guidelines below:
# **Recommended Use Cases**
- **Content Moderation:** Automatically filter explicit or suggestive content in online communities, forums, or media-sharing platforms.
- **Parental Controls:** Enable content-safe environments for children by flagging inappropriate images.
- **Dataset Curation:** Clean and label image datasets for safe and compliant ML training (a filtering sketch follows this list).
- **Digital Wellbeing:** Assist in building safer AI and web experiences by identifying sensitive media content.
- **Search Engine Filtering:** Improve content relevance and safety in image-based search results.
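As an illustration of the **Dataset Curation** use case, here is a hypothetical filtering sketch that keeps only images the model scores as confidently Neutral. The folder names and the 0.9 threshold are assumptions; tune them for your dataset:

```python
# Hypothetical curation sketch: copy only images classified as Neutral
# with high confidence. Folders and threshold are illustrative assumptions.
import shutil
from pathlib import Path

import torch
from PIL import Image
from transformers import AutoImageProcessor, SiglipForImageClassification

model_name = "prithivMLmods/Mature-Content-Detection"
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)

NEUTRAL_ID = 2   # Class 2: Neutral (see the class list above)
THRESHOLD = 0.9  # illustrative confidence cut-off

src, dst = Path("raw_images"), Path("curated_images")  # placeholder folders
dst.mkdir(exist_ok=True)

for path in sorted(src.glob("*.jpg")):
    image = Image.open(path).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        probs = model(**inputs).logits.softmax(dim=-1).squeeze()
    if float(probs[NEUTRAL_ID]) >= THRESHOLD:
        shutil.copy(path, dst / path.name)
```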
# **Prohibited / Discouraged Use**
- **Malicious Intent:** Do not use the model to harass, shame, expose, or target individuals or communities.
- **Invasion of Privacy:** Avoid deploying the model on private or sensitive user data without proper consent.
- **Illegal Activities:** Never use the model for generating, distributing, or flagging illegal content.
- **Bias Amplification:** Do not rely solely on this model to make sensitive moderation decisions. Always include human oversight, especially where reputational or legal consequences are involved.
- **Manipulation or Misrepresentation:** Avoid using this model to manipulate or misrepresent content classification in unethical ways.
# **Important Notes**
- This model works best on **anime and adult content** images. It is **not designed for general images** or unrelated categories (e.g., child safety, violence, hate symbols, or drug-related content).
- The output of the model is **probabilistic**, not definitive. Treat it as a **screening tool**, not a sole decision-maker; a routing sketch follows these notes.
- The labels reflect the model's best interpretation of visual signals — not moral or legal judgments.
- Always **review flagged content manually** in high-stakes applications.
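To make the screening-tool point above concrete, here is a minimal, hypothetical routing rule. The `auto_pass_threshold` of 0.8 is an illustrative assumption, not a recommended production setting:

```python
# Hypothetical routing sketch: treat the classifier as a screening tool and
# send anything that is not confidently Neutral to human review.
def route_image(probs, neutral_id=2, auto_pass_threshold=0.8):
    """probs: per-class probabilities from the classifier, in class-id order."""
    if probs[neutral_id] >= auto_pass_threshold:
        return "auto-pass"      # confidently safe
    return "human-review"       # everything else gets a manual look

# Neutral at 0.55 is not confident enough to auto-pass:
print(route_image([0.10, 0.05, 0.55, 0.20, 0.10]))  # -> human-review
```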
## **Ethical Reminder**
This model was built to **help** create safer digital ecosystems. **Do not misuse it** for exploitation, surveillance without consent, or personal gain at the expense of others. By using this model, you agree to act responsibly and ethically, keeping safety and privacy a top priority.
# **Sample Inference**
| Screenshot 1 | Screenshot 2 | Screenshot 3 |
|--------------|--------------|--------------|
|  |  |  |
| Screenshot 4 | Screenshot 5 | Screenshot 6 |
|--------------|--------------|--------------|
|  |  |  |
| Screenshot 7 |
|--------------|
|  |
# **Intended Use**
The **Mature-Content-Detection** model is designed to classify visual content for moderation and filtering purposes. Potential use cases include:
- **Content Moderation:** Automatically flagging explicit or sensitive content on platforms.
- **Parental Control Systems:** Filtering inappropriate material for child-safe environments.
- **Search Engine Filtering:** Improving search results by categorizing unsafe content.
- **Dataset Cleaning:** Assisting in the curation of safe training datasets for other AI models.