---
language: vi
tags:
- spam-detection
- vietnamese
- bartpho
license: apache-2.0
datasets:
- visolex/ViSpamReviews
metrics:
- accuracy
- f1
model-index:
- name: bartpho-spam-classification
  results:
  - task:
      type: text-classification
      name: Spam Detection (Multi-Class)
    dataset:
      name: ViSpamReviews
      type: custom
    metrics:
    - name: Accuracy
      type: accuracy
      value: <INSERT_ACCURACY>
    - name: F1 Score
      type: f1
      value: <INSERT_F1_SCORE>
base_model:
- vinai/bartpho-syllable
pipeline_tag: text-classification
---

# BARTPho-Spam-MultiClass

This model is fine-tuned from [`vinai/bartpho-syllable`](https://huggingface.co/vinai/bartpho-syllable) on the **ViSpamReviews** dataset for multi-class spam detection of Vietnamese reviews.

* **Task**: 4-way text classification (`NO-SPAM`, `SPAM-1`, `SPAM-2`, `SPAM-3`)
* **Dataset**: [ViSpamReviews](https://huggingface.co/datasets/visolex/ViSpamReviews)
* **Hyperparameters** (see the fine-tuning sketch below)

  * Batch size: 32
  * Learning rate: 3e-5
  * Epochs: 100
  * Max sequence length: 256
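
For reference, a minimal fine-tuning sketch that reproduces these settings with the Hugging Face `Trainer`. The column names (`text`, `label`) and split names are assumptions about the ViSpamReviews schema; adjust them to the actual dataset:

```python
from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    AutoModelForSequenceClassification,
    Trainer,
    TrainingArguments,
)

# Hyperparameters from the model card
MAX_LEN = 256
BATCH_SIZE = 32
LR = 3e-5
EPOCHS = 100

# Column and split names below ("text", "label", "train", "validation")
# are assumptions; check the dataset card for the real schema.
dataset = load_dataset("visolex/ViSpamReviews")

tokenizer = AutoTokenizer.from_pretrained("vinai/bartpho-syllable")
model = AutoModelForSequenceClassification.from_pretrained(
    "vinai/bartpho-syllable", num_labels=4
)

def tokenize(batch):
    # Truncate long reviews to the max sequence length used in training
    return tokenizer(batch["text"], truncation=True, max_length=MAX_LEN)

tokenized = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="bartpho-spam-classification",
    per_device_train_batch_size=BATCH_SIZE,
    per_device_eval_batch_size=BATCH_SIZE,
    learning_rate=LR,
    num_train_epochs=EPOCHS,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized.get("validation"),
    tokenizer=tokenizer,  # enables dynamic padding via DataCollatorWithPadding
)
trainer.train()
```
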
## Usage

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("visolex/bartpho-spam-classification")
model = AutoModelForSequenceClassification.from_pretrained("visolex/bartpho-spam-classification")
model.eval()

# Example review: "The review is too generic and not relevant."
text = "Đánh giá quá chung chung, không liên quan."
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=256)

# Run inference without gradient tracking and take the highest-scoring class
with torch.no_grad():
    pred = model(**inputs).logits.argmax(dim=-1).item()

label_map = {0: "NO-SPAM", 1: "SPAM-1", 2: "SPAM-2", 3: "SPAM-3"}
print(label_map[pred])
```
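
If you also want per-class confidence scores rather than only the predicted label, apply a softmax over the logits. This short sketch reuses `model`, `inputs`, and `label_map` from the snippet above:

```python
# Convert logits to class probabilities for the single input review
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1).squeeze(0)

for idx, p in enumerate(probs.tolist()):
    print(f"{label_map[idx]}: {p:.3f}")
```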