Improve model card: Add pipeline tag, update license, expand details and links

#1
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +153 -14
README.md CHANGED
@@ -1,24 +1,43 @@
  ---
+ base_model:
+ - microsoft/mdeberta-v3-base
  library_name: transformers
- tags:
- - generated_from_trainer
+ license: cc-by-4.0
  metrics:
  - accuracy
  - f1
+ tags:
+ - generated_from_trainer
+ - subjectivity-detection
+ - multilingual
+ - sentiment
+ - news
+ - mdeberta-v3
+ language:
+ - ar
+ - de
+ - en
+ - it
+ - bg
+ - el
+ - pl
+ - ro
+ - uk
+ datasets:
+ - MatteoFasulo/clef2025_checkthat_task1_subjectivity
+ pipeline_tag: text-classification
  model-index:
  - name: mdeberta-v3-base-subjectivity-sentiment-multilingual
    results: []
- license: mit
- base_model:
- - microsoft/mdeberta-v3-base
  ---

- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->
-
  # mdeberta-v3-base-subjectivity-sentiment-multilingual

- This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) on the [CheckThat! Lab Task 1 Subjectivity Detection at CLEF 2025](arxiv.org/abs/2507.11764).
+ This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) on the [CheckThat! Lab Task 1 Subjectivity Detection at CLEF 2025](https://arxiv.org/abs/2507.11764).
+
+ The official code repository can be found here: [https://github.com/MatteoFasulo/clef2025-checkthat](https://github.com/MatteoFasulo/clef2025-checkthat)
+ Explore related models and results on the Hugging Face Collection: [AI Wizards @ CLEF 2025 - CheckThat! Lab - Task 1 Subjectivity](https://huggingface.co/collections/MatteoFasulo/clef-2025-checkthat-lab-task-1-subjectivity-6878f0199d302acdfe2ceddb)
+
  It achieves the following results on the evaluation set:
  - Loss: 0.7762
  - Macro F1: 0.7580
@@ -31,15 +50,120 @@ It achieves the following results on the evaluation set:

  ## Model description

- More information needed
+ This model, `mdeberta-v3-base-subjectivity-sentiment-multilingual`, is part of AI Wizards' participation in the CLEF 2025 CheckThat! Lab Task 1: Subjectivity Detection in News Articles. Its primary goal is to classify sentences as subjective (opinion-laden) or objective across monolingual, multilingual, and zero-shot settings. The model was evaluated on Arabic, German, English, Italian, and Bulgarian (training/development), as well as on unseen languages such as Greek, Romanian, Polish, and Ukrainian (zero-shot evaluation).
+
+ The core innovation of this approach lies in enhancing transformer-based classifiers by integrating sentiment scores, derived from an auxiliary model, with sentence representations. This sentiment-augmented architecture aims to improve on standard fine-tuning, particularly by boosting the subjective F1 score. To address the class imbalance prevalent across languages, decision thresholds were calibrated on the development set.
+
+ Key contributions from the associated paper include:
+ * **Sentiment-Augmented Fine-Tuning**: enriching typical embedding-based models with sentiment scores from an auxiliary model, significantly improving subjective sentence detection.
+ * **Diverse Model Coverage**: benchmarking `mDeBERTaV3-base` (multilingual), `ModernBERT-base` (English), and `Llama3.2-1B` (zero-shot LLM baseline).
+ * **Threshold Calibration for Imbalance**: a simple yet effective method that tunes the decision threshold on each language's development data to improve macro-F1 (see the sketch below).
+
+ The framework led to high rankings, notably 1st place for Greek (Macro F1 = 0.51).
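+
+ As a minimal sketch of the threshold-calibration step (illustrative only, not the authors' released code; `calibrate_threshold` and its inputs are hypothetical), assuming `dev_probs` holds the model's subjective-class probabilities on a development set:
+
+ ```python
+ import numpy as np
+ from sklearn.metrics import f1_score
+
+ def calibrate_threshold(dev_probs: np.ndarray, dev_labels: np.ndarray) -> float:
+     """Pick the SUBJ-probability cutoff that maximizes macro-F1 on the dev set."""
+     thresholds = np.linspace(0.1, 0.9, 81)  # candidate cutoffs
+     scores = [f1_score(dev_labels, (dev_probs >= t).astype(int), average="macro") for t in thresholds]
+     return float(thresholds[int(np.argmax(scores))])
+ ```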

  ## Intended uses & limitations

- More information needed
+ This model is intended for subjectivity detection in news articles, classifying sentences as subjective or objective. The task is central to combating misinformation, improving fact-checking pipelines, and supporting journalists. The model is applicable in both monolingual and multilingual contexts and generalizes robustly to unseen languages in zero-shot settings.
+
+ **Intended uses:**
+ * Classifying sentences in news articles as subjective or objective.
+ * Serving as a component in misinformation-detection and fact-checking systems.
+ * Assisting journalists in analyzing news content for bias or opinion.
+
+ **Limitations:**
+ * As noted by the authors, a mistake in the submission process led to artificially low official multilingual Macro F1 scores (e.g., 0.24). Corrected results show substantially better performance (Macro F1 = 0.68), which would have placed the model 9th overall; rely on the corrected metrics when assessing performance.
+ * Performance may vary across languages and in domains beyond news articles, although the model showed strong generalization in zero-shot settings.

  ## Training and evaluation data

- More information needed
+ The model was fine-tuned on the datasets provided for CLEF 2025 CheckThat! Lab Task 1: Subjectivity Detection in News Articles.
+ Training and development datasets were provided for Arabic, German, English, Italian, and Bulgarian; the final evaluation added unseen languages (Greek, Romanian, Polish, and Ukrainian) to assess generalization. The training procedure integrated sentiment features and applied decision-threshold calibration, optimized on the development sets, to mitigate class imbalance.
+
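+ As a minimal sketch, the dataset referenced in this card's metadata can be loaded with the `datasets` library (the config and split names below are assumptions; check the dataset card for the exact ones):
+
+ ```python
+ from datasets import load_dataset
+
+ # Repo id comes from the card metadata; "english" is a hypothetical config name.
+ ds = load_dataset("MatteoFasulo/clef2025_checkthat_task1_subjectivity", "english")
+ print(ds["train"][0])
+ ```
+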
+ ## How to use
+
+ You can use this model directly with the Hugging Face `transformers` library to classify text:
+
+ ```python
+ import torch
+ import torch.nn as nn
+ from transformers import AutoTokenizer, DebertaV2Config, DebertaV2Model, PreTrainedModel, pipeline
+ from transformers.models.deberta_v2.modeling_deberta_v2 import ContextPooler
+
+ # Auxiliary multilingual sentiment model that supplies the three sentiment scores.
+ sent_pipe = pipeline(
+     "sentiment-analysis",
+     model="cardiffnlp/twitter-xlm-roberta-base-sentiment",
+     tokenizer="cardiffnlp/twitter-xlm-roberta-base-sentiment",
+     top_k=None,  # return all 3 sentiment scores
+ )
+
+ class CustomModel(PreTrainedModel):
+     config_class = DebertaV2Config
+
+     def __init__(self, config, sentiment_dim=3, num_labels=2, *args, **kwargs):
+         super().__init__(config, *args, **kwargs)
+         self.deberta = DebertaV2Model(config)
+         self.pooler = ContextPooler(config)
+         output_dim = self.pooler.output_dim
+         self.dropout = nn.Dropout(0.1)
+         # The classifier sees the pooled sentence embedding concatenated with the sentiment scores.
+         self.classifier = nn.Linear(output_dim + sentiment_dim, num_labels)
+
+     def forward(self, input_ids, positive, neutral, negative, token_type_ids=None, attention_mask=None, labels=None):
+         outputs = self.deberta(input_ids=input_ids, attention_mask=attention_mask)
+         encoder_layer = outputs[0]
+         pooled_output = self.pooler(encoder_layer)
+         sentiment_features = torch.stack((positive, neutral, negative), dim=1).to(pooled_output.dtype)
+         combined_features = torch.cat((pooled_output, sentiment_features), dim=1)
+         logits = self.classifier(self.dropout(combined_features))
+         return {'logits': logits}
+
+ model_name = "MatteoFasulo/mdeberta-v3-base-subjectivity-sentiment-multilingual"
+ tokenizer = AutoTokenizer.from_pretrained("microsoft/mdeberta-v3-base")
+ config = DebertaV2Config.from_pretrained(
+     model_name,
+     num_labels=2,
+     id2label={0: 'OBJ', 1: 'SUBJ'},
+     label2id={'OBJ': 0, 'SUBJ': 1},
+     output_attentions=False,
+     output_hidden_states=False
+ )
+ model = CustomModel.from_pretrained(model_name, config=config)
+ model.eval()
+
+ def classify_subjectivity(text: str):
+     # A) get the full sentiment distribution from the auxiliary pipeline
+     dist = sent_pipe(text)[0]
+     pos = next(d["score"] for d in dist if d["label"] == "positive")
+     neu = next(d["score"] for d in dist if d["label"] == "neutral")
+     neg = next(d["score"] for d in dist if d["label"] == "negative")
+
+     # B) tokenize the text
+     inputs = tokenizer(text, padding=True, truncation=True, max_length=256, return_tensors='pt')
+
+     # C) run the model, feeding in the three sentiment scores
+     with torch.no_grad():
+         outputs = model(
+             input_ids=inputs["input_ids"],
+             attention_mask=inputs["attention_mask"],
+             positive=torch.tensor(pos).unsqueeze(0).float(),
+             neutral=torch.tensor(neu).unsqueeze(0).float(),
+             negative=torch.tensor(neg).unsqueeze(0).float(),
+         )
+
+     # D) compute probabilities and pick the top label
+     probs = torch.softmax(outputs['logits'][0], dim=-1)
+     label = model.config.id2label[int(probs.argmax())]
+     score = probs.max().item()
+
+     return {"label": label, "score": score}
+
+ examples = [
+     "The company reported a 10% increase in revenue for the last quarter.",
+     # German: "The stated error rates can therefore only apply to symptomatic patients."
+     "Die angegebenen Fehlerquoten können daher nur für symptomatische Patienten gelten.",
+     # Italian: "This definitively dismantles the narrative that energy choices can be purely the product of 'technical' rather than political assessments."
+     "Si smonta qui definitivamente la narrazione per cui le scelte energetiche possono essere frutto esclusivo di valutazioni “tecniche” e non politiche.",
+ ]
+ for text in examples:
+     result = classify_subjectivity(text)
+     print(f"Text: {text}")
+     print(f"→ Subjectivity: {result['label']} (score={result['score']:.2f})\n")
+ ```
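+
+ The snippet above picks the argmax label, i.e. an implicit 0.5 cutoff. To mirror the paper's threshold calibration, compare the subjective-class probability against a tuned cutoff inside `classify_subjectivity` instead; the value below is a hypothetical placeholder, not a published threshold:
+
+ ```python
+ # Hypothetical dev-set-calibrated cutoff (see the calibration sketch above).
+ threshold = 0.42
+ label = 'SUBJ' if probs[1].item() >= threshold else 'OBJ'
+ ```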

  ## Training procedure

@@ -65,10 +189,25 @@ The following hyperparameters were used during training:
  | 0.3579 | 5.0 | 2010 | 0.7443 | 0.7476 | 0.7485 | 0.7614 | 0.7154 | 0.6440 | 0.8045 | 0.7518 |
  | 0.3579 | 6.0 | 2412 | 0.7762 | 0.7580 | 0.7558 | 0.7614 | 0.7100 | 0.6878 | 0.7336 | 0.7676 |

-
  ### Framework versions

  - Transformers 4.49.0
  - Pytorch 2.5.1+cu121
  - Datasets 3.3.1
- - Tokenizers 0.21.0
+ - Tokenizers 0.21.0
+
+ ## Citation
+
+ If you find our work helpful or inspiring, please feel free to cite it:
+
+ ```bibtex
+ @misc{fasulo2025aiwizardscheckthat2025,
+       title={AI Wizards at CheckThat! 2025: Enhancing Transformer-Based Embeddings with Sentiment for Subjectivity Detection in News Articles},
+       author={Matteo Fasulo and Luca Babboni and Luca Tedeschini},
+       year={2025},
+       eprint={2507.11764},
+       archivePrefix={arXiv},
+       primaryClass={cs.CL},
+       url={https://arxiv.org/abs/2507.11764},
+ }
+ ```