prithivMLmods committed on
Commit 9a0b7ff · verified · 1 Parent(s): db6ede7

Update README.md

Files changed (1)
  1. README.md +3 -3
README.md CHANGED
@@ -14,6 +14,8 @@ The model categorizes images into two classes:
  - **Class 0:** "Unsafe Content" – indicating that the image contains vulgarity, nudity, or explicit content.
  - **Class 1:** "Safe Content" – indicating that the image is appropriate and does not contain any unsafe elements.
 
+ # **Run with Transformers**
+
  ```python
  !pip install -q transformers torch pillow gradio
  ```
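The new **Run with Transformers** section added in this hunk only covers the install step. Below is a minimal inference sketch under stated assumptions: the checkpoint is published as `prithivMLmods/Guard-Against-Unsafe-Content-Siglip2` (inferred from the model name in the README, not confirmed by this diff) and loads through the standard `transformers` image-classification auto classes, with the label order matching the class list above.

```python
# Minimal sketch, not the repo's official example. Assumes the repo ID
# below is correct and that the checkpoint is a SigLIP2-based
# image-classification model.
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

model_id = "prithivMLmods/Guard-Against-Unsafe-Content-Siglip2"  # assumed repo ID
processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id)
model.eval()

image = Image.open("example.jpg").convert("RGB")  # any local test image
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
probs = logits.softmax(dim=-1).squeeze()

# Per the README: Class 0 = "Unsafe Content", Class 1 = "Safe Content".
for idx, p in enumerate(probs.tolist()):
    print(f"{model.config.id2label[idx]}: {p:.3f}")
```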
@@ -68,6 +70,4 @@ The **Guard-Against-Unsafe-Content-Siglip2** model is designed to detect **inapp
  - **NSFW Content Detection:** Identifying images containing explicit content to help filter inappropriate material.
  - **Content Moderation:** Assisting platforms in filtering out unsafe images before they are shared publicly.
  - **Parental Controls:** Enabling automated filtering of explicit images in child-friendly environments.
- - **Safe Image Classification:** Helping AI-powered applications distinguish between safe and unsafe content for appropriate usage.
-
- This model is intended for **research, content moderation, and automated safety applications**, rather than **real-time detection** of explicit content.
+ - **Safe Image Classification:** Helping AI-powered applications distinguish between safe and unsafe content for appropriate usage.
 
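For the content-moderation and parental-control use cases listed in the second hunk, a hedged sketch of gating on the classifier's output follows. It reuses `processor` and `model` from the previous snippet; the 0.5 threshold is an illustrative assumption to be tuned per deployment, not a value from the model card.

```python
# Illustrative moderation gate, reusing processor/model from the sketch above.
def is_safe(image, threshold: float = 0.5) -> bool:
    """Return True if the predicted 'Unsafe Content' probability is below threshold."""
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        probs = model(**inputs).logits.softmax(dim=-1).squeeze()
    return probs[0].item() < threshold  # index 0 = "Unsafe Content" per the README

# Example: only publish an upload that passes the gate.
if is_safe(Image.open("upload.jpg").convert("RGB")):
    print("Publish")
else:
    print("Hold for review")
```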