Update README.md
README.md CHANGED
@@ -6,9 +6,9 @@ base_model:
 - google/siglip2-base-patch16-224
 pipeline_tag: image-classification
 library_name: transformers
+new_version: prithivMLmods/Guard-Against-Unsafe-Content2-Siglip2
 ---
-
-
+

 # **Guard-Against-Unsafe-Content-Siglip2**

@@ -79,4 +79,4 @@ The **Guard-Against-Unsafe-Content-Siglip2** model is designed to detect **inapp
 - **NSFW Content Detection:** Identifying images containing explicit content to help filter inappropriate material.
 - **Content Moderation:** Assisting platforms in filtering out unsafe images before they are shared publicly.
 - **Parental Controls:** Enabling automated filtering of explicit images in child-friendly environments.
-- **Safe Image Classification:** Helping AI-powered applications distinguish between safe and unsafe content for appropriate usage.
+- **Safe Image Classification:** Helping AI-powered applications distinguish between safe and unsafe content for appropriate usage.
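Since the card declares `pipeline_tag: image-classification` and `library_name: transformers`, the use cases above (NSFW detection, content moderation, parental controls) would typically be built on the standard transformers image-classification pipeline output. A minimal sketch of the thresholding step follows; the label name `"unsafe"` and the helper `is_unsafe` are assumptions for illustration, not names confirmed by this diff.

```python
from typing import Dict, List


def is_unsafe(results: List[Dict], threshold: float = 0.5) -> bool:
    """Return True if an unsafe label meets the score threshold.

    `results` follows the transformers image-classification output
    format: a list of {"label": str, "score": float} dicts sorted by
    score. The label string "unsafe" is an assumed label name.
    """
    return any(
        r["label"].lower() == "unsafe" and r["score"] >= threshold
        for r in results
    )


# Typical end-to-end usage (sketch only; downloads the model weights):
# from transformers import pipeline
# classifier = pipeline(
#     "image-classification",
#     model="prithivMLmods/Guard-Against-Unsafe-Content-Siglip2",
# )
# results = classifier("photo.jpg")  # path, URL, or PIL.Image
# print(is_unsafe(results))
```

Keeping the threshold explicit lets a moderation pipeline trade precision against recall, e.g. a stricter threshold for child-friendly environments than for general content feeds.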