ashjo317 committed
Commit 84c3b55 · verified · 1 Parent(s): e4f27c1

Update app.py

Files changed (1):
  1. app.py +4 -2
app.py CHANGED
@@ -73,11 +73,13 @@ mksdown = """# πŸ˜ƒ Welcome To The Friendly Text Moderation for Twitter (X) Post
    "%Toxic": 65.95,
    "%Safe": 34.05
    }
-
+ ```
+ ---
+ * In addition, an "ADJUSTED TOXICITY" verdict is shown, based on the tolerance level the user has selected.
  * The OpenAI API analyzes the tweet for 13 categories and displays each with a %
  * The real-world dataset is from the "Toxic Tweets Dataset" (https://www.kaggle.com/datasets/ashwiniyer176/toxic-tweets-dataset/data)
  ---
- # 🌟 "AI Solution Architect" Course by ELVTR
+ # 🌟 Project for "AI Solution Architect" Course by ELVTR
  """
  # Function to get toxicity scores from OpenAI
  def get_toxicity_openai(tweet, tolerance_dropdown):
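The hunk ends at the `def get_toxicity_openai(tweet, tolerance_dropdown):` signature, the function that produces the scores quoted in the markdown above. The sketch below is a hypothetical reconstruction, not the repository's code: it assumes the current `openai` Python client and the `omni-moderation-latest` model (whose `category_scores` cover 13 categories), and it guesses at how the %Toxic/%Safe split and the tolerance-based "ADJUSTED TOXICITY" verdict might be computed. The helper name `get_toxicity_openai_sketch` and the 0–100 `tolerance` scale are assumptions.

```python
# Minimal sketch, assuming the openai>=1.x client and omni-moderation-latest.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def get_toxicity_openai_sketch(tweet: str, tolerance: float) -> dict:
    """Hypothetical reconstruction: score a tweet, then apply the user's tolerance (0-100)."""
    response = client.moderations.create(
        model="omni-moderation-latest",
        input=tweet,
    )
    # category_scores exposes 13 per-category probabilities (harassment, hate, violence, ...).
    scores = response.results[0].category_scores.model_dump()

    # Assumption: the highest category score stands in for overall toxicity.
    toxic_pct = round(max(scores.values()) * 100, 2)
    safe_pct = round(100 - toxic_pct, 2)

    return {
        "categories": {name: round(s * 100, 2) for name, s in scores.items()},
        "%Toxic": toxic_pct,
        "%Safe": safe_pct,
        # Assumption: the verdict flips to "Toxic" only when the score exceeds
        # the tolerance chosen in the dropdown.
        "ADJUSTED TOXICITY": "Toxic" if toxic_pct > tolerance else "Safe",
    }
```

Calling `get_toxicity_openai_sketch("some tweet", tolerance=50)` would then return the per-category percentages plus the `"%Toxic"`/`"%Safe"` pair shown in the example output above.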