Pieter Delobelle committed
Commit c6e31d4 · 1 parent: 4c093e7
Update README.md

README.md (changed):
# RobBERT finetuned for sentiment analysis on DBRD

This is a finetuned model based on [RobBERT (v2)](https://huggingface.co/pdelobelle/robbert-v2-dutch-base). We used [DBRD](https://huggingface.co/datasets/dbrd), which consists of book reviews from [hebban.nl](https://hebban.nl). Hence our example sentences are about books. We did some limited experiments to test whether this also works for other domains, but the results were not exactly amazing.

We released a distilled model and a `base`-sized model. Both models perform quite well, so there is only a slight performance tradeoff:

| Model          | Identifier | #Params. | Accuracy |
|----------------|------------|----------|----------|
| RobBERT (v2)   | [`DTAI-KULeuven/robbert-v2-dutch-sentiment`](https://huggingface.co/DTAI-KULeuven/robbert-v2-dutch-sentiment) | 110 M | 93.3* |
| Merged (p=0.5) | [`DTAI-KULeuven/robbertje-merged-dutch-sentiment`](https://huggingface.co/DTAI-KULeuven/robbertje-merged-dutch-sentiment) | 74 M | 92.9 |

*The RobBERT results are from a different run than the one reported in the paper.
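As a minimal usage sketch, either identifier from the table can be loaded with the 🤗 Transformers `pipeline` API; the Dutch example sentence is our own addition and the exact label names depend on the model's config:

```python
from transformers import pipeline

# Load the finetuned base-sized model from the Hub; swap in
# "DTAI-KULeuven/robbertje-merged-dutch-sentiment" for the distilled variant.
classifier = pipeline(
    "sentiment-analysis",
    model="DTAI-KULeuven/robbert-v2-dutch-sentiment",
)

# A Dutch book review sentence (our own example, not taken from DBRD).
print(classifier("Dit boek was geweldig, ik heb het in één adem uitgelezen!"))
# e.g. [{'label': 'Positive', 'score': 0.99}] -- label names come from the model config
```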
# Training data and setup

We used the [Dutch Book Reviews Dataset (DBRD)](https://huggingface.co/datasets/dbrd) from van der Burgh et al. (2019).
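For reference, here is a minimal sketch of loading DBRD with the 🤗 Datasets library; the `text` and `label` feature names are assumptions based on the dataset card and worth verifying locally:

```python
from datasets import load_dataset

# DBRD: Dutch book reviews labeled as positive or negative.
dbrd = load_dataset("dbrd")

print(dbrd)  # inspect the available splits and features
example = dbrd["train"][0]
print(example["text"][:200], example["label"])  # first review snippet and its label
```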