Commit 5316a51 (verified) · Parent(s): ca9b6fc
Committed by MatteoFasulo and nielsr (HF Staff)

Improve model card: Add pipeline tag, update license and tags, expand content and usage (#1)


- Improve model card: Add pipeline tag, update license and tags, expand content and usage (c892ba17f06184748b6fe02ab2a4d7184d142bae)
- Update README.md (68c27ef44c549caa3109f25269b0a6548b453406)


Co-authored-by: Niels Rogge <[email protected]>

Files changed (1):
  1. README.md (+72 -15)
README.md CHANGED
@@ -1,25 +1,31 @@
  ---
- library_name: transformers
- license: mit
  base_model: microsoft/mdeberta-v3-base
- tags:
- - generated_from_trainer
+ language:
+ - bg
+ library_name: transformers
+ license: cc-by-4.0
  metrics:
  - accuracy
  - f1
+ tags:
+ - generated_from_trainer
+ - deberta
+ - multilingual
+ - subjectivity-detection
+ pipeline_tag: text-classification
  model-index:
  - name: mdeberta-v3-base-subjectivity-bulgarian
    results: []
- language:
- - bg
+ datasets:
+ - MatteoFasulo/clef2025_checkthat_task1_subjectivity
  ---

- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->
-
  # mdeberta-v3-base-subjectivity-bulgarian

- This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) on the [CheckThat! Lab Task 1 Subjectivity Detection at CLEF 2025](arxiv.org/abs/2507.11764).
+ This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) for **Subjectivity Detection in News Articles**. It was presented by AI Wizards in the paper [AI Wizards at CheckThat! 2025: Enhancing Transformer-Based Embeddings with Sentiment for Subjectivity Detection in News Articles](https://arxiv.org/abs/2507.11764) as part of the [CLEF 2025 CheckThat! Lab Task 1](https://huggingface.co/papers/2507.11764).
+
+ The official code and materials for this project can be found in the [GitHub repository](https://github.com/MatteoFasulo/clef2025-checkthat).
+
  It achieves the following results on the evaluation set:
  - Loss: 0.5111
  - Macro F1: 0.7869
@@ -32,15 +38,27 @@ It achieves the following results on the evaluation set:

  ## Model description

- More information needed
+ This model identifies whether a sentence is **subjective** (e.g., opinion-laden) or **objective**. The task is a key component in combating misinformation, improving fact-checking pipelines, and supporting journalists. This specific checkpoint is fine-tuned for Bulgarian.
+
+ The primary strategy is to enhance a transformer-based classifier (specifically mDeBERTaV3-base) by integrating sentiment scores, derived from an auxiliary model, with the sentence representation. This aims to improve upon standard fine-tuning, particularly by boosting the subjective F1 score. To address the class imbalance prevalent across languages, decision-threshold calibration optimized on the development set was employed. The approach achieved high rankings in the CLEF 2025 CheckThat! Lab Task 1, notably 1st for Greek (zero-shot, Macro F1 = 0.51) and 1st–4th place in most monolingual settings.

  ## Intended uses & limitations

- More information needed
+ **Intended Uses:**
+ This model is intended for research and practical applications of subjectivity detection in news articles, i.e., distinguishing subjective (opinion-laden) from objective content. It can be useful for:
+ * Combating misinformation by identifying opinionated content.
+ * Improving fact-checking pipelines.
+ * Supporting journalists in content analysis and bias assessment.
+
+ **Limitations:**
+ * While the overarching research explored multilingual and zero-shot settings, this specific checkpoint is fine-tuned for Bulgarian. Without further fine-tuning, its performance may degrade on other languages or on domains not represented in the training data.
+ * The paper notes that a quirk in the initial submission led to a skewed class distribution and under-calibrated thresholds; the reported results reflect the corrected evaluation. Be aware of this when applying the model to data with a significantly different class distribution.

  ## Training and evaluation data

- More information needed
+ This model was trained and evaluated as part of the CLEF 2025 CheckThat! Lab Task 1: Subjectivity Detection in News Articles. Training and development datasets were provided for Arabic, German, English, Italian, and Bulgarian. The final evaluation included additional unseen languages such as Greek, Romanian, Polish, and Ukrainian to assess generalization.
+
+ To address class imbalance, a common issue across these languages, decision-threshold calibration optimized on the development set was employed. More details on the datasets and experimental setup can be found in the [paper](https://arxiv.org/abs/2507.11764) and the [GitHub repository](https://github.com/MatteoFasulo/clef2025-checkthat).

  ## Training procedure

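The fusion strategy in the model description above, sentiment scores from an auxiliary model combined with the transformer's sentence representation, can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the three-way sentiment probabilities, the concatenation at the [CLS] embedding, and the single linear head are placeholders (see the paper and GitHub repository for the actual architecture).

```python
# Minimal sketch: fuse auxiliary sentiment probabilities with the [CLS]
# sentence embedding before classification. The fusion-by-concatenation
# design, head size, and 3-way sentiment scores are illustrative
# assumptions, not the paper's exact architecture.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class SentimentAugmentedClassifier(nn.Module):
    def __init__(self, encoder_name="microsoft/mdeberta-v3-base",
                 n_sentiment_scores=3, n_labels=2):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        # The classification head sees the sentence embedding plus the
        # sentiment scores appended as extra features.
        self.head = nn.Linear(hidden + n_sentiment_scores, n_labels)

    def forward(self, input_ids, attention_mask, sentiment_scores):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]  # [CLS] token representation
        fused = torch.cat([cls, sentiment_scores], dim=-1)
        return self.head(fused)

tokenizer = AutoTokenizer.from_pretrained("microsoft/mdeberta-v3-base")
model = SentimentAugmentedClassifier()
batch = tokenizer(["По принцип никой не иска войни."], return_tensors="pt")
# In the paper these come from an auxiliary sentiment model; dummy values here.
sentiment = torch.tensor([[0.2, 0.5, 0.3]])  # [negative, neutral, positive]
logits = model(batch["input_ids"], batch["attention_mask"], sentiment)
print(logits.shape)  # torch.Size([1, 2])
```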
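The card twice credits decision-threshold calibration on the development set for handling class imbalance. A minimal version of that idea is to sweep a grid over the subjective-class probability and keep the cutoff that maximizes macro F1 on the development split; the grid spacing, scikit-learn scoring, and dummy data below are assumptions for illustration:

```python
# Sketch of decision-threshold calibration: choose the cutoff on P(SUBJ)
# that maximizes macro F1 on a development set. Grid and data are dummies.
import numpy as np
from sklearn.metrics import f1_score

def calibrate_threshold(dev_probs, dev_labels):
    """dev_probs: P(SUBJ) per sentence; dev_labels: 1 = SUBJ, 0 = OBJ."""
    best_t, best_f1 = 0.5, -1.0
    for t in np.arange(0.05, 0.96, 0.01):
        preds = (dev_probs >= t).astype(int)
        score = f1_score(dev_labels, preds, average="macro")
        if score > best_f1:
            best_t, best_f1 = t, score
    return best_t, best_f1

# Illustrative development data standing in for real model outputs.
rng = np.random.default_rng(0)
dev_probs = rng.random(200)
dev_labels = (dev_probs + rng.normal(0.0, 0.3, 200) > 0.7).astype(int)
threshold, dev_f1 = calibrate_threshold(dev_probs, dev_labels)
print(f"calibrated threshold={threshold:.2f}, dev macro F1={dev_f1:.3f}")
```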

@@ -66,10 +84,49 @@ The following hyperparameters were used during training:
  | No log | 5.0 | 230 | 0.5065 | 0.7728 | 0.7835 | 0.7696 | 0.7315 | 0.7966 | 0.6763 | 0.7803 |
  | No log | 6.0 | 276 | 0.5111 | 0.7869 | 0.7949 | 0.7839 | 0.7510 | 0.8033 | 0.7050 | 0.7930 |

-
  ### Framework versions

  - Transformers 4.50.0
  - Pytorch 2.5.1+cu121
  - Datasets 3.3.1
- - Tokenizers 0.21.0
+ - Tokenizers 0.21.0
+
+ ## How to use
+
+ You can use this model directly with the Hugging Face `transformers` library for text classification:
+
+ ```python
+ from transformers import pipeline
+
+ # Load the text classification pipeline
+ classifier = pipeline(
+     "text-classification",
+     model="MatteoFasulo/mdeberta-v3-base-subjectivity-bulgarian",
+     tokenizer="microsoft/mdeberta-v3-base",
+ )
+
+ # Example usage:
+ result1 = classifier("По принцип никой не иска войни, но за нещастие те се случват.")  # "In principle, nobody wants wars, but unfortunately they happen."
+ print(f"Classification: {result1}")
+ # Expected output: [{'label': 'SUBJ', 'score': ...}]
+
+ result2 = classifier("В един момент започнал сам да търси изход за своето спасение и здраве")  # "At some point he began looking on his own for a way out, for his salvation and health."
+ print(f"Classification: {result2}")
+ # Expected output: [{'label': 'OBJ', 'score': ...}]
+ ```
+
+ ## Citation
+
+ If you find our work helpful or inspiring, please cite it:
+
+ ```bibtex
+ @misc{fasulo2025aiwizardscheckthat2025,
+   title={AI Wizards at CheckThat! 2025: Enhancing Transformer-Based Embeddings with Sentiment for Subjectivity Detection in News Articles},
+   author={Matteo Fasulo and Luca Babboni and Luca Tedeschini},
+   year={2025},
+   eprint={2507.11764},
+   archivePrefix={arXiv},
+   primaryClass={cs.CL},
+   url={https://arxiv.org/abs/2507.11764},
+ }
+ ```
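The usage example in the card keeps the pipeline's default argmax decision. Since the approach relies on a calibrated decision threshold, one can instead request scores for both labels and apply a cutoff to P(SUBJ) manually. A sketch, with a placeholder threshold (the calibrated value is not given in the card):

```python
# Apply a custom decision threshold on P(SUBJ) instead of the pipeline's
# default argmax. The 0.35 value is a placeholder, not the authors'
# calibrated threshold; it would be tuned on development data.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="MatteoFasulo/mdeberta-v3-base-subjectivity-bulgarian",
    tokenizer="microsoft/mdeberta-v3-base",
)

THRESHOLD = 0.35  # illustrative; calibrate on a development set

# top_k=None returns the score for every label, not just the top one.
scores = classifier("По принцип никой не иска войни, но за нещастие те се случват.", top_k=None)
p_subj = next(s["score"] for s in scores if s["label"] == "SUBJ")
print("SUBJ" if p_subj >= THRESHOLD else "OBJ", round(p_subj, 3))
```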