---
license: mit
library_name: transformers
pipeline_tag: text-classification
tags:
- sentiment-analysis
- text-classification
- distilbert
- fine-tuned
- nlp
language:
- en
datasets:
- sentiment140
metrics:
- accuracy
- f1
base_model: distilbert-base-uncased
model-index:
- name: my-sentiment-model
  results:
  - task:
      type: text-classification
      name: Text Classification
    dataset:
      name: sentiment140
      type: sentiment140
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.85
    - name: F1 Score (Macro)
      type: f1
      value: 0.84
---
# My DistilBERT Sentiment Model
Fine-tuned DistilBERT for 3-class sentiment classification (negative, neutral, positive).
## Model Description
This model is a fine-tuned version of `distilbert-base-uncased` for sentiment analysis. It has been trained to classify English text into three sentiment categories (see the config check after this list):
- **Negative** (0)
- **Neutral** (1)
- **Positive** (2)
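The integer ids above should match the `id2label` mapping stored in the model config; a quick way to verify (the repo id `your-username/my-sentiment-model` is a placeholder):

```python
from transformers import AutoConfig

# Placeholder repo id; substitute wherever the model is actually hosted.
config = AutoConfig.from_pretrained("your-username/my-sentiment-model")

# Expected output: {0: 'negative', 1: 'neutral', 2: 'positive'}
print(config.id2label)
```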
## Intended Uses
This model is intended for sentiment analysis tasks on English text (a quick-start sketch follows the list). It can be used to:
- Analyze customer feedback and reviews
- Monitor social media sentiment
- Classify emotions in text data
- Support content moderation systems
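As a quick start for any of these uses, here is a minimal batch-scoring sketch using the `pipeline` API. The repo id is a placeholder, and the human-readable labels assume the model config's `id2label` maps ids to the names above; otherwise the pipeline prints `LABEL_0`, `LABEL_1`, `LABEL_2`:

```python
from transformers import pipeline

# Placeholder repo id; substitute the actual model location.
classifier = pipeline("text-classification", model="your-username/my-sentiment-model")

reviews = [
    "The battery dies within an hour.",
    "Arrived on time, nothing special.",
    "Best purchase I've made all year!",
]
for review, result in zip(reviews, classifier(reviews)):
    print(f"{result['label']:>8} ({result['score']:.2%})  {review}")
```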
## Limitations
- Trained primarily on English text
- May not perform well on domain-specific jargon
- Performance may vary on very short or very long texts
- Potential bias from training data
## Training Details
- **Base Model**: distilbert-base-uncased
- **Training Epochs**: 2
- **Batch Size**: 8
- **Learning Rate**: 3e-5
- **Max Sequence Length**: 128
- **Optimizer**: AdamW
- **Weight Decay**: 0.01
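A minimal sketch of this setup with the `Trainer` API follows; `train_dataset` and `eval_dataset` are assumed to be pre-tokenized sentiment140 splits prepared elsewhere, and the 128-token limit is applied at tokenization time rather than through `TrainingArguments`:

```python
from transformers import (
    AutoModelForSequenceClassification,
    Trainer,
    TrainingArguments,
)

model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased",
    num_labels=3,  # negative / neutral / positive
)

# Mirrors the hyperparameters listed above; Trainer uses AdamW by default.
training_args = TrainingArguments(
    output_dir="my-sentiment-model",
    num_train_epochs=2,
    per_device_train_batch_size=8,
    learning_rate=3e-5,
    weight_decay=0.01,
)

# train_dataset / eval_dataset: assumed pre-tokenized splits (max_length=128).
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
)
trainer.train()
```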
## Model Performance
The model achieves the following performance on the held-out test set (an evaluation sketch follows the list):
- **Accuracy**: 85%
- **F1-Score (Macro)**: 84%
- **F1-Score (Weighted)**: 85%
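For reference, a sketch of how metrics like these are computed with scikit-learn; it assumes `y_true` and `y_pred` are integer label arrays gathered by running the model over the test split:

```python
from sklearn.metrics import accuracy_score, f1_score

# y_true / y_pred: integer labels (0=negative, 1=neutral, 2=positive)
# collected from a pass over the held-out test split.
print(f"Accuracy:      {accuracy_score(y_true, y_pred):.2%}")
print(f"F1 (macro):    {f1_score(y_true, y_pred, average='macro'):.2%}")
print(f"F1 (weighted): {f1_score(y_true, y_pred, average='weighted'):.2%}")
```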
## Usage
Install the required dependencies:
```bash
pip install transformers torch
```
Load and use the model:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Load model and tokenizer
model_name = "your-username/my-sentiment-model"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# Prepare text
text = "I love this product!"
inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True, max_length=128)

# Get prediction
with torch.no_grad():
    outputs = model(**inputs)
    probabilities = torch.nn.functional.softmax(outputs.logits, dim=-1)
    predicted_class = torch.argmax(probabilities, dim=-1).item()

# Map prediction to label
labels = {0: "negative", 1: "neutral", 2: "positive"}
confidence = probabilities[0][predicted_class].item()
print(f"Text: {text}")
print(f"Sentiment: {labels[predicted_class]} (confidence: {confidence:.2%})")
```
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{my-sentiment-model,
  author = {Your Name},
  title = {Fine-tuned DistilBERT for Sentiment Analysis},
  year = {2025},
  publisher = {Hugging Face},
  url = {https://huggingface.co/your-username/my-sentiment-model}
}
```
## License
This model is released under the MIT License.