Meeex2 committed on
Commit b072a6d · verified · 1 Parent(s): 0246d32

original readme

Files changed (1)
  1. README.md +22 -28
README.md CHANGED
@@ -1,25 +1,4 @@
  ---
- license: apache-2.0
- base_model: distilbert-base-uncased
- model_name: Diffguard
- paper_link: https://arxiv.org/pdf/2412.00064
- pipeline_tag: text-classification
- tags:
- - Transformers
- - PyTorch
- - safety
- - inappropriate
- - distilbert
- - DiffGuard
- language:
- - en
- datasets:
- - eliasalbouzidi/NSFW-Safe-Dataset
- metrics:
- - f1
- - accuracy
- - precision
- - recall
  widget:
  - text: A family hiking in the mountains
    example_title: Safe
@@ -33,15 +12,33 @@ widget:
    example_title: Nsfw
  - text: A mass shooting
    example_title: Nsfw
+ base_model: distilbert-base-uncased
+ license: apache-2.0
+ language:
+ - en
+ metrics:
+ - f1
+ - accuracy
+ - precision
+ - recall
+ pipeline_tag: text-classification
+ tags:
+ - Transformers
+ - ' PyTorch'
+ - safety
+ - innapropriate
+ - distilbert
+ datasets:
+ - eliasalbouzidi/NSFW-Safe-Dataset
  model-index:
- - name: Diffguard
+ - name: NSFW-Safe-Dataset
    results:
    - task:
        name: Text Classification
        type: text-classification
      dataset:
        name: NSFW-Safe-Dataset
-       type: eliasalbouzidi/NSFW-Safe-Dataset
+       type: .
      metrics:
      - name: F1
        type: f1
@@ -51,7 +48,7 @@ model-index:
        value: 0.98
  ---
  
- # DiffGuard Model Card
+ # Model Card
  
  <!-- Provide a quick summary of what the model is/does. -->
  
@@ -74,10 +71,6 @@ The model can be used directly to classify text into one of the two classes. It
  - **Language (NLP):** English
  - **License:** apache-2.0
  
- ## Technical Paper:
- 
- A more detailed technical overview of the model and the dataset can be found [here](https://arxiv.org/pdf/2412.00064).
- 
  ### Uses
  
  The model can be integrated into larger systems for content moderation or filtering.
@@ -180,5 +173,6 @@ from transformers import pipeline
  pipe = pipeline("text-classification", model="eliasalbouzidi/distilbert-nsfw-text-classifier")
  ```
  
+ 
  ## Contact
  Please reach out to [email protected] if you have any questions or feedback.
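As a usage note on the snippet retained in the final hunk: the README loads the classifier through the `transformers` pipeline, and its Uses section says the model can be integrated into larger systems for content moderation or filtering. The sketch below shows one way that wiring might look; the `is_safe` helper, the 0.5 threshold, and the `safe`/`nsfw` label names are illustrative assumptions rather than anything this commit specifies.

```python
# Minimal sketch, assuming the checkpoint named in the diff is reachable and
# that it returns "safe" / "nsfw" labels (the label names are an assumption here).
from transformers import pipeline

pipe = pipeline(
    "text-classification",
    model="eliasalbouzidi/distilbert-nsfw-text-classifier",
)

def is_safe(text: str, threshold: float = 0.5) -> bool:
    """Return True when the top prediction is 'safe' with enough confidence."""
    result = pipe(text)[0]  # e.g. {"label": "safe", "score": 0.99}
    return result["label"].lower() == "safe" and result["score"] >= threshold

# Gate user-generated text before it reaches downstream components.
print(is_safe("A family hiking in the mountains"))
```

In a real moderation path the threshold would be tuned against the F1/accuracy figures reported in the card rather than left at the 0.5 used above.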
 