---
datasets:
- syedkhalid076/Sentiment-Analysis-Over-sampled
language:
- en
metrics:
- accuracy
model_name: RoBERTa Sentiment Analysis Model v2
base_model: roberta-base
library_name: transformers
tags:
- Text Classification
- Transformers
- Safetensors
- English
- roberta
- Inference Endpoints
pipeline_tag: text-classification
---


# RoBERTa Sentiment Analysis Model v2

This repository hosts a fine-tuned [RoBERTa](https://huggingface.co/roberta-base) model for sentiment analysis. The model classifies text into three categories: **Negative (0)**, **Neutral (1)**, and **Positive (2)**. It was fine-tuned on the [syedkhalid076/Sentiment-Analysis-Over-sampled](https://huggingface.co/datasets/syedkhalid076/Sentiment-Analysis-Over-sampled) dataset and reaches 90.20% accuracy on its evaluation set.
The model was trained specifically for feedback sentiment analysis in UX research, but it also performs well on general sentiment analysis tasks.

---

## Model Details

- **Base Model**: [RoBERTa-base](https://huggingface.co/roberta-base)
- **Number of Labels**: 3 (0: Negative, 1: Neutral, 2: Positive)
- **Model Size**: 125M parameters (see the sketch after this list)
- **Language**: English (`en`)
- **Metrics**: Accuracy: **90.20%**
- **Tensor Type**: FP32
- **Dataset**: [syedkhalid076/Sentiment-Analysis-Over-sampled](https://huggingface.co/datasets/syedkhalid076/Sentiment-Analysis-Over-sampled)
- **Library**: [Transformers](https://github.com/huggingface/transformers)
- **File Format**: [Safetensors](https://github.com/huggingface/safetensors)
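
The label mapping and parameter count listed above can be checked directly against the published checkpoint; a minimal sketch:

```python
from transformers import AutoConfig, AutoModelForSequenceClassification

model_id = "syedkhalid076/RoBERTa-Sentimental-Analysis-Model"

# Inspect the classification head configuration (number of labels, id-to-label mapping)
config = AutoConfig.from_pretrained(model_id)
print(config.num_labels, config.id2label)

# Count parameters to confirm the ~125M figure
model = AutoModelForSequenceClassification.from_pretrained(model_id)
print(f"{sum(p.numel() for p in model.parameters()):,} parameters")
```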

---

## Features

- **Text Classification**: Identify the sentiment of input text as Negative, Neutral, or Positive.
- **High Accuracy**: Achieves 90.20% accuracy on the evaluation dataset.
- **Hosted on Hugging Face**: Ready-to-use inference endpoints for quick deployment.
- **Efficient Inference**: Compact 125M-parameter model stored as FP32 Safetensors.

---

## Installation

To use this model, install the `transformers` library along with `torch`, which the usage example below relies on:

```bash
pip install transformers torch
```

---

## Usage

Here’s how you can load the model and tokenizer and perform inference:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("syedkhalid076/RoBERTa-Sentimental-Analysis-Model")
model = AutoModelForSequenceClassification.from_pretrained("syedkhalid076/RoBERTa-Sentimental-Analysis-Model")
model.eval()

# Define input text
text = "I absolutely love this product! It's fantastic."

# Tokenize input
inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True)

# Perform inference without tracking gradients
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class = torch.argmax(logits, dim=-1).item()

# Map the predicted class id to a human-readable label
sentiment_labels = {0: "Negative", 1: "Neutral", 2: "Positive"}
print(f"Predicted sentiment: {sentiment_labels[predicted_class]}")
```
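
For quick experiments, the `pipeline` API wraps tokenization, inference, and label mapping in one call. A minimal sketch; note that the returned label names come from the checkpoint's `id2label` mapping and may be generic (`LABEL_0`, `LABEL_1`, `LABEL_2`) corresponding to the 0/1/2 scheme above:

```python
from transformers import pipeline

# Build a text-classification pipeline around the fine-tuned checkpoint
classifier = pipeline(
    "text-classification",
    model="syedkhalid076/RoBERTa-Sentimental-Analysis-Model",
)

print(classifier("The onboarding flow was confusing and slow."))
# e.g. [{'label': ..., 'score': ...}]
```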

---

## Dataset

This model is fine-tuned on the [syedkhalid076/Sentiment-Analysis-Over-sampled](https://huggingface.co/datasets/syedkhalid076/Sentiment-Analysis-Over-sampled) dataset. The dataset has been carefully preprocessed and oversampled to ensure balanced label representation and improve model performance.
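
To inspect the data yourself, the dataset can be loaded with the `datasets` library. This is a rough sketch; the split and column names used below (`train`, `label`) are assumptions, so check them against the actual schema:

```python
from collections import Counter

from datasets import load_dataset

# Load the over-sampled sentiment dataset from the Hub
dataset = load_dataset("syedkhalid076/Sentiment-Analysis-Over-sampled")
print(dataset)

# Check that the labels are balanced after over-sampling
# ("train" split and "label" column are assumptions; adjust to the real schema)
print(Counter(dataset["train"]["label"]))
```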

---

## Performance

The model was evaluated on a test set and achieved the following metrics:

- **Accuracy**: 90.20% (0.9019906657776932)

During training, the model was validated after each epoch and metrics were logged to track progress.
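
The reported accuracy can be reproduced approximately with a simple evaluation loop. This is a sketch only; the split name (`test`) and column names (`text`, `label`) are assumptions and should be adjusted to the dataset's actual schema:

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "syedkhalid076/RoBERTa-Sentimental-Analysis-Model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

# "test", "text", and "label" are assumptions; adjust to the dataset's real schema
dataset = load_dataset("syedkhalid076/Sentiment-Analysis-Over-sampled", split="test")

correct = 0
for example in dataset:
    inputs = tokenizer(example["text"], return_tensors="pt", truncation=True)
    with torch.no_grad():
        pred = model(**inputs).logits.argmax(dim=-1).item()
    correct += int(pred == example["label"])

print(f"Accuracy: {correct / len(dataset):.4f}")
```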

---

## Inference Endpoints

You can use the Hugging Face Inference API to deploy and test this model in production environments.
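
A minimal sketch of calling the serverless Inference API over HTTP, assuming you have a Hugging Face access token (the `hf_xxx` value below is a placeholder):

```python
import requests

API_URL = "https://api-inference.huggingface.co/models/syedkhalid076/RoBERTa-Sentimental-Analysis-Model"
headers = {"Authorization": "Bearer hf_xxx"}  # placeholder token; use your own

response = requests.post(
    API_URL,
    headers=headers,
    json={"inputs": "The new dashboard is a joy to use."},
)
print(response.json())  # typically a list of {"label": ..., "score": ...} entries
```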

---


## Author

**Syed Khalid Hussain**  
UX Designer & Developer  
Specializing in crafting user-centric digital experiences.