---
license: apache-2.0
language:
- en
base_model:
- google/siglip2-base-patch16-224
pipeline_tag: image-classification
library_name: transformers
tags:
- Content Safety
- Mature
- Content
- Detection
- Enticing
- Sensual
- Neutral
- Anime
- ViT
- multi-label
- digital spaces
- SigLIP2
- vision-language encoder
- single-label
- '2e-4'
---

![MCD.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/RtUfzeu7llyblWu4EEmMd.png)

# **Mature-Content-Detection**

> **Mature-Content-Detection** is an image-classification model fine-tuned from the **google/siglip2-base-patch16-224** vision-language encoder for a single-label classification task. It classifies images into mature or neutral content categories using the **SiglipForImageClassification** architecture.

> [!Note]
> Use this model to support positive, safe, and respectful digital spaces. Misuse is strongly discouraged and may violate platform or regional policies. Because this is a classification model, it does not generate unsafe content; it only assigns labels to images.

> [!Important]
> Neutral = Safe / Normal

```
Classification Report:
                     precision    recall  f1-score   support

      Anime Picture     0.8130    0.8066    0.8098      5600
             Hentai     0.8317    0.8134    0.8224      4180
            Neutral     0.8344    0.7785    0.8055      5503
        Pornography     0.9161    0.8464    0.8799      5600
Enticing or Sensual     0.7699    0.8979    0.8290      5600

           accuracy                         0.8296     26483
          macro avg     0.8330    0.8286    0.8293     26483
       weighted avg     0.8331    0.8296    0.8298     26483
```

---

![download (2).png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/ca-xwkO8_dmywConDiO7g.png)

---

The model categorizes images into five classes:

- **Class 0:** Anime Picture
- **Class 1:** Hentai
- **Class 2:** Neutral
- **Class 3:** Pornography
- **Class 4:** Enticing or Sensual

# **Run with Transformers 🤗**

```python
!pip install -q transformers torch pillow gradio
```

```python
import gradio as gr
import torch
from PIL import Image
from transformers import AutoImageProcessor, SiglipForImageClassification

# Load model and processor
model_name = "prithivMLmods/Mature-Content-Detection"
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)

# Label mapping (class index -> name)
labels = {
    "0": "Anime Picture",
    "1": "Hentai",
    "2": "Neutral",
    "3": "Pornography",
    "4": "Enticing or Sensual"
}

def mature_content_detection(image):
    """Predicts the type of content in the image."""
    image = Image.fromarray(image).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")

    with torch.no_grad():
        outputs = model(**inputs)
        logits = outputs.logits
        probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist()

    predictions = {labels[str(i)]: round(probs[i], 3) for i in range(len(probs))}
    return predictions

# Create Gradio interface
iface = gr.Interface(
    fn=mature_content_detection,
    inputs=gr.Image(type="numpy"),
    outputs=gr.Label(label="Prediction Scores"),
    title="Mature Content Detection",
    description="Upload an image to classify whether it contains anime, hentai, neutral, pornographic, or enticing/sensual content."
)

# Launch the app
if __name__ == "__main__":
    iface.launch()
```
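If you do not need the Gradio UI, a minimal inference snippet along the following lines should work; the file path `example.jpg` is a placeholder for your own image.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, SiglipForImageClassification

model_name = "prithivMLmods/Mature-Content-Detection"
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)

# Class order matches the label mapping above
labels = ["Anime Picture", "Hentai", "Neutral", "Pornography", "Enticing or Sensual"]

image = Image.open("example.jpg").convert("RGB")  # placeholder path
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

probs = logits.softmax(dim=1).squeeze()
top = int(probs.argmax())
print(f"Predicted: {labels[top]} ({probs[top].item():.3f})")
```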
---

# **Guidelines for Using Mature-Content-Detection**

The **Mature-Content-Detection** model is a computer vision classifier designed to detect and categorize adult-themed and anime-based content. It supports responsible content moderation and filtering across digital platforms. To ensure ethical and intended use of the model, please follow the guidelines below.

# **Recommended Use Cases**

- **Content Moderation:** Automatically filter explicit or suggestive content in online communities, forums, or media-sharing platforms.
- **Parental Controls:** Enable content-safe environments for children by flagging inappropriate images.
- **Dataset Curation:** Clean and label image datasets for safe and compliant ML training.
- **Digital Wellbeing:** Assist in building safer AI and web experiences by identifying sensitive media content.
- **Search Engine Filtering:** Improve content relevance and safety in image-based search results.

# **Prohibited / Discouraged Use**

- **Malicious Intent:** Do not use the model to harass, shame, expose, or target individuals or communities.
- **Invasion of Privacy:** Do not deploy the model on private or sensitive user data without proper consent.
- **Illegal Activities:** Never use the model for generating, distributing, or flagging illegal content.
- **Bias Amplification:** Do not rely solely on this model for sensitive moderation decisions. Always include human oversight, especially where reputational or legal consequences are involved.
- **Manipulation or Misrepresentation:** Do not use this model to manipulate or misrepresent content classification in unethical ways.

# **Important Notes**

- This model works best on **anime and adult content** images. It is **not designed for general images** or unrelated categories (e.g., child safety, violence, hate symbols, drugs).
- The output of the model is **probabilistic**, not definitive. Treat it as a **screening tool**, not a sole decision-maker.
- The labels reflect the model's best interpretation of visual signals, not moral or legal judgments.
- Always **review flagged content manually** in high-stakes applications. A minimal screening sketch follows these notes.
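To make the "screening tool, not decision-maker" point concrete, here is a rough sketch of a review-routing wrapper. The `REVIEW_THRESHOLD` value and the unsafe-label set are illustrative assumptions that you would calibrate for your own platform.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, SiglipForImageClassification

model_name = "prithivMLmods/Mature-Content-Detection"
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)

labels = ["Anime Picture", "Hentai", "Neutral", "Pornography", "Enticing or Sensual"]
UNSAFE = {"Hentai", "Pornography", "Enticing or Sensual"}
REVIEW_THRESHOLD = 0.70  # illustrative; calibrate on held-out data

def screen(image: Image.Image) -> tuple[str, dict]:
    """Route an image to human review instead of auto-removing it."""
    inputs = processor(images=image.convert("RGB"), return_tensors="pt")
    with torch.no_grad():
        probs = model(**inputs).logits.softmax(dim=1).squeeze()
    scores = {labels[i]: probs[i].item() for i in range(len(labels))}
    unsafe_score = max(scores[name] for name in UNSAFE)
    # The model only screens; a human makes the final call.
    action = "flag_for_human_review" if unsafe_score >= REVIEW_THRESHOLD else "allow"
    return action, scores
```

Note that the wrapper never removes content outright; anything above the threshold goes to a reviewer, in line with the guidance above.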
## **Ethical Reminder**

This model was built to **help** create safer digital ecosystems. **Do not misuse it** for exploitation, surveillance without consent, or personal gain at the expense of others. By using this model, you agree to act responsibly and ethically, keeping safety and privacy a top priority.

# **Sample Inference**

| Screenshot 1 | Screenshot 2 | Screenshot 3 |
|--------------|--------------|--------------|
| ![Screenshot 1](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/Hen3_GPI2p4Rn2fO5gLfU.png) | ![Screenshot 2](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/2IQ81Dq8CYvslyAY4v-8m.png) | ![Screenshot 3](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/WN5MyOpU8m0-etywI5ppS.png) |

| Screenshot 4 | Screenshot 5 | Screenshot 6 |
|--------------|--------------|--------------|
| ![Screenshot 4](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/W5WuycqUo2aidGZ-Ez9np.png) | ![Screenshot 5](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/m6lzVo35rPNhmHrphvo2c.png) | ![Screenshot 6](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/micYbWtI5Tdewxf2KljOv.png) |

| Screenshot 7 |
|--------------|
| ![Screenshot 7](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/PW0nrUP7F3eVy1wjTbOf-.png) |

# **Intended Use**

The **Mature-Content-Detection** model is designed to classify visual content for moderation and filtering purposes. Potential use cases include:

- **Content Moderation:** Automatically flagging explicit or sensitive content on platforms.
- **Parental Control Systems:** Filtering inappropriate material for child-safe environments.
- **Search Engine Filtering:** Improving search results by categorizing unsafe content.
- **Dataset Cleaning:** Assisting in the curation of safe training datasets for other AI models. A rough batch-filtering sketch follows this list.
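As an example of the dataset-cleaning use case, the sketch below keeps images the model scores confidently as Neutral and sets the rest aside for manual review. The `raw_images` folder and the 0.80 threshold are illustrative assumptions, not recommended settings.

```python
from pathlib import Path

import torch
from PIL import Image
from transformers import AutoImageProcessor, SiglipForImageClassification

model_name = "prithivMLmods/Mature-Content-Detection"
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)

labels = ["Anime Picture", "Hentai", "Neutral", "Pornography", "Enticing or Sensual"]
NEUTRAL_IDX = labels.index("Neutral")
KEEP_THRESHOLD = 0.80  # illustrative; tune for your tolerance for false negatives

kept, needs_review = [], []
for path in sorted(Path("raw_images").glob("*.jpg")):  # illustrative folder
    image = Image.open(path).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        probs = model(**inputs).logits.softmax(dim=1).squeeze()
    bucket = kept if probs[NEUTRAL_IDX].item() >= KEEP_THRESHOLD else needs_review
    bucket.append(path)

print(f"Kept {len(kept)} images; {len(needs_review)} set aside for manual review.")
```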