doberst committed on
Commit
5c72b0c
1 Parent(s): 5552c2e

Update README.md

Files changed (1)
  1. README.md +7 -7
README.md CHANGED
@@ -4,19 +4,19 @@ inference: false
 tags: [green, p1, llmware-encoder, ov]
 ---
 
-# unitary-toxic-roberta-ov
+# valurank-bias-ov
 
-**unitary-toxic-roberta-ov** is a toxicity classifier from [unitary/unbiased-toxic-roberta](https://www.huggingface.com/unitary/unbiased-toxic-roberta), packaged in OpenVino format.
+**valurank-bias-ov** is a bias classifier from [valurank/distilroberta-bias](https://www.huggingface.com/valurank/distilroberta-bias), packaged in OpenVino format.
 
-The classifier can be used to evaluate toxic content in a prompt or in model output.
+The classifier can be used to evaluate bias content in a prompt or in model output.
 
 ### Model Description
 
-- **Developed by:** unitary
+- **Developed by:** valurank
 - **Quantized by:** llmware
-- **Model type:** roberta
-- **Parameters:** 125 million
-- **Model Parent:** unitary/unbiased-toxic-roberta
+- **Model type:** distilroberta
+- **Parameters:** 82 million
+- **Model Parent:** valurank/distilroberta-bias
 - **Language(s) (NLP):** English
 - **License:** Apache 2.0
 - **Uses:** Prompt safety
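
The updated card describes a two-label sequence classifier used for prompt safety screening. As a rough illustration of how such a classifier's raw logits become a usable bias score, here is a minimal sketch; the logit values and the label order (neutral first, biased second) are hypothetical, since the real head layout comes from the model's config, and this is not the model's own inference code:

```python
import math

def softmax(logits):
    # Numerically stable softmax: subtract the max logit before exponentiating.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw logits from the classifier head: [neutral, biased].
logits = [-1.2, 2.3]
probs = softmax(logits)

# Probability assigned to the (assumed) "biased" label; a caller would
# compare this against a chosen threshold to flag a prompt or response.
bias_score = probs[1]
```

In practice the thresholding policy (e.g. flag anything above 0.5) is an application-level choice, not something the model card prescribes.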