Update README.md
README.md CHANGED

@@ -10,6 +10,7 @@ metrics:
 model-index:
 - name: roberta-finetuned-WebClassification
   results: []
+pipeline_tag: text-classification
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -17,7 +18,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 # roberta-finetuned-WebClassification
 
-This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the
+This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the [Web Classification Dataset](https://www.kaggle.com/datasets/hetulmehta/website-classification).
 It achieves the following results on the evaluation set:
 - Loss: 0.3473
 - Accuracy: 0.9504
@@ -27,15 +28,31 @@ It achieves the following results on the evaluation set:
 
 ## Model description
 
-
+The model classifies websites into the following categories:
+- "0": "Adult",
+- "1": "Business/Corporate",
+- "2": "Computers and Technology",
+- "3": "E-Commerce",
+- "4": "Education",
+- "5": "Food",
+- "6": "Forums",
+- "7": "Games",
+- "8": "Health and Fitness",
+- "9": "Law and Government",
+- "10": "News",
+- "11": "Photography",
+- "12": "Social Networking and Messaging",
+- "13": "Sports",
+- "14": "Streaming Services",
+- "15": "Travel"
 
 ## Intended uses & limitations
 
-
+Web classification in English (for now).
 
 ## Training and evaluation data
 
-
+Trained and tested on an 80/20 split of the [Web Classification Dataset](https://www.kaggle.com/datasets/hetulmehta/website-classification).
 
 ## Training procedure
 
@@ -71,4 +88,4 @@ The following hyperparameters were used during training:
 - Transformers 4.16.2
 - Pytorch 1.9.1
 - Datasets 1.18.4
-- Tokenizers 0.11.6
+- Tokenizers 0.11.6
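To illustrate the `pipeline_tag: text-classification` metadata added in this commit, here is a minimal inference sketch with 🤗 Transformers. The repo id below is a placeholder (the card does not state the namespace the model is published under), and the example input is purely illustrative.

```python
from transformers import pipeline

# Placeholder repo id: substitute the actual Hub path of this fine-tuned model.
MODEL_ID = "<user>/roberta-finetuned-WebClassification"

# "text-classification" matches the pipeline_tag declared in the README metadata.
classifier = pipeline("text-classification", model=MODEL_ID)

# Classify a snippet of page text; the returned label should be one of the
# 16 categories listed in the model description (e.g. "Travel", "News").
print(classifier("Book cheap flights and hotel deals for your next vacation."))
```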
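The training-data note says the model was trained and evaluated on an 80/20 split of the Kaggle dataset. A sketch of how such a split could be produced with 🤗 Datasets follows; the CSV file name and the seed are assumptions, not details taken from the card.

```python
from datasets import load_dataset

# Assumed local export of the Kaggle "website-classification" dataset.
raw = load_dataset("csv", data_files="website_classification.csv")["train"]

# 80/20 train/test split, mirroring the card's description; the seed is arbitrary.
splits = raw.train_test_split(test_size=0.2, seed=42)
train_ds, eval_ds = splits["train"], splits["test"]
print(len(train_ds), len(eval_ds))
```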