## Model Training

The sentiment analysis model is trained with a Support Vector Machine (SVM) classifier using a linear kernel. The cleaned text is transformed into a bag-of-words representation with `CountVectorizer`. The trained classifier is saved as `Sentiment_classifier_model.joblib`, and the fitted vectorizer is saved as `vectorizer_model.joblib`.
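
For reference, a minimal training sketch along these lines is shown below. It is illustrative only: the input file name, column names, and train/test split are assumptions, not the exact training script.

```python
# Minimal sketch of the training setup described above (illustrative, not the exact script).
import joblib
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Assumed input: a DataFrame with a cleaned text column and a sentiment label column (hypothetical names).
df = pd.read_csv("cleaned_reviews.csv")
X_train, X_test, y_train, y_test = train_test_split(
    df["cleaned_text"], df["sentiment"], test_size=0.2, random_state=42
)

# Bag-of-words features.
vectorizer = CountVectorizer()
X_train_vec = vectorizer.fit_transform(X_train)

# Linear-kernel SVM classifier.
clf = SVC(kernel="linear")
clf.fit(X_train_vec, y_train)

print("Test accuracy:", clf.score(vectorizer.transform(X_test), y_test))

# Persist the artifacts under the names used in this repo.
joblib.dump(clf, "Sentiment_classifier_model.joblib")
joblib.dump(vectorizer, "vectorizer_model.joblib")
```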

## Model Usage

The snippet below downloads the classifier from the Hub, vectorizes a cleaned input sentence, predicts its sentiment, and appends the result to an Excel file (writing `.xlsx` files requires an engine such as `openpyxl`).

```python
from huggingface_hub import hf_hub_download
import joblib
import pandas as pd

# Only load pickle/joblib files from sources you trust.
# Read more here: https://skops.readthedocs.io/en/stable/persistence.html
model = joblib.load(
    hf_hub_download("DineshKumar1329/Sentiment_Analysis", "sklearn_model.joblib")
)

# Load the TF-IDF vectorizer used during training
tfidf_vectorizer = joblib.load('/content/vectorizer_model.joblib')  # Replace with your actual filename

# Take user input
user_input = input("Enter a sentence: ")

# Clean the user input (clean_text is the preprocessing helper used during training; see the sketch below)
cleaned_input = clean_text(user_input)

# Transform the cleaned text using the vectorizer
input_matrix = tfidf_vectorizer.transform([cleaned_input])

# Make a prediction
prediction = model.predict(input_matrix)[0]

# Display the prediction
print(f"Predicted Sentiment: {prediction}")

# Create a DataFrame with the result
df_result = pd.DataFrame({'User_Input': [user_input], 'Predicted_Sentiment': [prediction]})

# Save the DataFrame to an Excel file (append if the file already exists)
excel_filename = '/content/output_predictions.xlsx'  # Replace with your desired filename
try:
    # Load existing predictions from the Excel file
    df_existing = pd.read_excel(excel_filename)
    # Append the new prediction to the existing DataFrame
    df_combined = pd.concat([df_existing, df_result], ignore_index=True)
except FileNotFoundError:
    # If the file doesn't exist yet, start from the new result
    df_combined = df_result

# Save the combined DataFrame to the Excel file
df_combined.to_excel(excel_filename, index=False)
```
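
The snippet above assumes a `clean_text` helper that matches the preprocessing applied at training time. Its definition is not included here, so the version below is only a plausible sketch (lowercasing, stripping punctuation, collapsing whitespace); the actual function may differ.

```python
import re
import string

def clean_text(text: str) -> str:
    """Plausible sketch of the text-cleaning step; the function used in training may differ."""
    text = text.lower()
    text = re.sub(f"[{re.escape(string.punctuation)}]", " ", text)  # drop punctuation
    text = re.sub(r"\s+", " ", text).strip()                        # collapse whitespace
    return text
```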