Update app.py
app.py CHANGED
````diff
@@ -73,11 +73,13 @@ mksdown = """# π Welcome To The Friendly Text Moderation for Twitter (X) Post
 "%Toxic": 65.95,
 "%Safe": 34.05
 }
-
+```
+---
+* In addition we have "ADJUSTED TOXICITY" as well based on the tolerance level User has selected.
 * Open API analyzes tweet for 13 categories and displays them with %
 * The real-world dataset is from the "Toxic Tweets Dataset" (https://www.kaggle.com/datasets/ashwiniyer176/toxic-tweets-dataset/data)
 ---
-# π "AI Solution Architect" Course by ELVTR
+# π Project for "AI Solution Architect" Course by ELVTR
 """
 # Function to get toxicity scores from OpenAI
 def get_toxicity_openai(tweet, tolerance_dropdown):
````
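The hunk above only touches the `mksdown` docstring; the body of `get_toxicity_openai` is not shown in this diff. Below is a minimal sketch of what such a function could look like, assuming the Space calls OpenAI's moderation endpoint (`omni-moderation-latest`, which reports 13 categories) and treats the tolerance dropdown as a 0–1 threshold. The thresholding rule for the "ADJUSTED TOXICITY" and the exact return shape are hypothetical, not taken from the commit.

```python
# Sketch only: the diff does not show this function's body, so the model
# name, the tolerance rule, and the return shape are all assumptions.
import os

from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

def get_toxicity_openai(tweet: str, tolerance_dropdown: float) -> dict:
    """Score a tweet across the moderation categories and return
    per-category percentages plus a %Toxic / %Safe split."""
    response = client.moderations.create(
        model="omni-moderation-latest",  # reports 13 moderation categories
        input=tweet,
    )
    result = response.results[0]
    # Convert each raw category score (0..1) to a percentage.
    scores = {
        category: round(score * 100, 2)
        for category, score in result.category_scores.model_dump().items()
    }
    worst = max(scores.values())
    # Hypothetical "adjusted toxicity" rule: the tweet counts as toxic
    # only when its worst category exceeds the user-selected tolerance.
    pct_toxic = worst if worst >= tolerance_dropdown * 100 else 0.0
    return {
        **scores,
        "%Toxic": round(pct_toxic, 2),
        "%Safe": round(100 - pct_toxic, 2),
    }
```

With this rule, `%Toxic` and `%Safe` always sum to 100, matching the `"%Toxic": 65.95` / `"%Safe": 34.05` example in the docstring above.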