Danish BERT Tone for the detection of subjectivity/objectivity

The BERT Tone model detects whether a Danish text is subjective or objective. The model is a fine-tuned version of the pretrained Danish BERT model by BotXO.

See the DaNLP documentation for more details.

Here is how to use the model:

from transformers import BertTokenizer, BertForSequenceClassification

# Load the fine-tuned subjectivity/objectivity classifier and its tokenizer from the Hugging Face Hub
model = BertForSequenceClassification.from_pretrained("alexandrainst/da-subjectivivity-classification-base")
tokenizer = BertTokenizer.from_pretrained("alexandrainst/da-subjectivivity-classification-base")
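
A minimal inference sketch follows, assuming the model and tokenizer loaded above; the Danish example sentence is made up, and the label names are read from the model configuration rather than assumed.

import torch

# Hypothetical Danish example sentence; replace with your own text
text = "Jeg synes, det er en fantastisk film."

# Tokenize and run the classifier without tracking gradients
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Map the highest-scoring class index to its label via the model config
predicted_id = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_id])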

Training data

The data used for training comes from the Twitter Sentiment and EuroParl Sentiment 2 datasets.
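
Both datasets are distributed through the DaNLP package. The sketch below is an assumption based on the DaNLP documentation's dataset loaders; the class name EuroparlSentiment2 and the load_with_pandas call may differ between package versions.

# Assumed DaNLP dataset loader; name and return value taken from the
# DaNLP documentation and may differ between package versions
from danlp.datasets import EuroparlSentiment2

eurosent = EuroparlSentiment2()
df = eurosent.load_with_pandas()  # pandas DataFrame with sentences and labels
print(df.head())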
