AhilanPonnusamy committed 55906fb (verified · 1 parent: c29b9d8): Create README.md

---
license: apache-2.0
tags:
- sentiment-analysis
- distillation
- small-model
- smollm
- nlp
model-index:
- name: distilled-smollm-sentiment-analyzer
  results:
  - task:
      type: sentiment-analysis
    dataset:
      name: Custom Distillation Dataset
      type: text
    metrics:
    - name: Accuracy
      type: accuracy
      value: ~XX.XX%
---

# Distilled SmollM Sentiment Analyzer

This model is a distilled version of a larger sentiment analysis model, fine-tuned on custom datasets using the [Hugging Face Transformers](https://huggingface.co/docs/transformers) library. It is designed for **efficient, lightweight sentiment analysis** tasks in resource-constrained environments.

✅ **Key Features:**
- Compact model architecture (`SmollM`)
- Distilled for speed and smaller size
- Fine-tuned for sentiment classification tasks
- Supports labels: `negative`, `neutral`, `positive`

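A minimal quick-start sketch, assuming the checkpoint's `config.json` carries an `id2label` mapping for the three classes (if it does not, the pipeline falls back to generic `LABEL_0`/`LABEL_1`/`LABEL_2` names and the explicit mapping shown in the Usage section below applies):

```python
from transformers import pipeline

# Quick-start: load the checkpoint as a text-classification pipeline.
classifier = pipeline(
    "text-classification",
    model="AhilanPonnusamy/distilled-smollm-sentiment-analyzer",
)

# Returns a list of {"label": ..., "score": ...} dicts; label names depend
# on the id2label mapping stored in the model config.
print(classifier("The food was cold and the service was slow."))
```
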
---

## 🔍 Model Details

| Model | Distilled SmollM Sentiment Analyzer |
|:------|:------------------------------------|
| Base Model | SmollM |
| Task | Sentiment Analysis (3-class: negative, neutral, positive) |
| Dataset | Custom Yelp Review + Distilled Dataset |
| Framework | Hugging Face Transformers |
| Distillation Method | Knowledge Distillation |
| Accuracy | ~75% relative accuracy (compared with the teacher model, gemma3:12b) |

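As a rough illustration of the knowledge-distillation objective named in the table, the sketch below blends soft targets from a teacher model (gemma3:12b, per the table) with ordinary cross-entropy on the hard labels. The `temperature` and `alpha` values are illustrative assumptions, not the hyperparameters used for this checkpoint.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Illustrative distillation objective (not this model's exact training code).

    Combines a KL term on temperature-softened teacher/student distributions
    with standard cross-entropy on the gold labels.
    """
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    # Scale by T^2 to keep gradient magnitudes comparable across temperatures.
    kd_loss = F.kl_div(log_soft_student, soft_teacher, reduction="batchmean") * temperature ** 2
    ce_loss = F.cross_entropy(student_logits, labels)
    return alpha * kd_loss + (1.0 - alpha) * ce_loss
```

A higher `alpha` weights the teacher's soft targets more heavily, while a lower value leans on the labeled data.
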
---

## 🚀 Usage

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Load the tokenizer and the fine-tuned classification model from the Hub.
tokenizer = AutoTokenizer.from_pretrained("AhilanPonnusamy/distilled-smollm-sentiment-analyzer")
model = AutoModelForSequenceClassification.from_pretrained("AhilanPonnusamy/distilled-smollm-sentiment-analyzer")

# Tokenize a single review and run a forward pass without tracking gradients.
inputs = tokenizer("The movie was amazing!", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Pick the highest-scoring class and map it back to a sentiment label.
logits = outputs.logits
predicted_class_id = logits.argmax(dim=-1).item()

label_map = {0: "negative", 1: "neutral", 2: "positive"}
print("Predicted sentiment:", label_map[predicted_class_id])
```
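
If per-class probabilities are needed rather than only the top label, a softmax over the same logits works as a small continuation of the snippet above (reusing its `logits` and `label_map` variables):

```python
import torch.nn.functional as F

# Per-class probabilities for the single example encoded above.
probs = F.softmax(logits, dim=-1).squeeze(0)
for class_id, label in label_map.items():
    print(f"{label}: {probs[class_id].item():.3f}")
```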