MatteoFasulo committed · Commit b6fe09b · verified · 1 Parent(s): bfc8af5

Update README.md

Files changed (1):
  1. README.md +40 -23
README.md CHANGED
@@ -15,6 +15,8 @@ tags:
 model-index:
 - name: ModernBERT-base-subjectivity-english
   results: []
+ datasets:
+ - MatteoFasulo/clef2025_checkthat_task1_subjectivity
 ---

 # ModernBERT-base-subjectivity-english
@@ -52,28 +54,22 @@ The `ModernBERT-base-subjectivity-english` model was fine-tuned on the English p
 You can use this model directly with the `transformers` library for text classification:

 ```python
- from transformers import AutoTokenizer, AutoModelForSequenceClassification
- import torch
-
- model_name = "MatteoFasulo/ModernBERT-base-subjectivity-english"
- tokenizer = AutoTokenizer.from_pretrained(model_name)
- model = AutoModelForSequenceClassification.from_pretrained(model_name)
-
- # Example text
- text = "The new policy is an absolute disaster for the economy."
-
- # Tokenize and perform inference
- inputs = tokenizer(text, return_tensors="pt")
- with torch.no_grad():
-     logits = model(**inputs).logits
-
- # Get predicted class (0 for OBJ, 1 for SUBJ as per model config)
- predicted_class_id = logits.argmax().item()
- labels = model.config.id2label  # Access the label mapping from model config
- predicted_label = labels[predicted_class_id]
-
- print(f"Text: '{text}'")
- print(f"Predicted label: {predicted_label}")
+ from transformers import pipeline
+
+ # Load the text classification pipeline
+ classifier = pipeline(
+     "text-classification",
+     model="MatteoFasulo/ModernBERT-base-subjectivity-english",
+     tokenizer="answerdotai/ModernBERT-base",
+ )
+
+ text1 = "The company reported a 10% increase in profits in the last quarter."
+ result1 = classifier(text1)
+ print(f"Text: '{text1}' Classification: {result1}")
+
+ text2 = "This product is absolutely amazing and everyone should try it!"
+ result2 = classifier(text2)
+ print(f"Text: '{text2}' Classification: {result2}")
 ```

 ## Training procedure
@@ -106,4 +102,25 @@ The following hyperparameters were used during training:
 - Transformers 4.49.0
 - Pytorch 2.5.1+cu121
 - Datasets 3.3.1
- - Tokenizers 0.21.0
+ - Tokenizers 0.21.0
+
+ ## Code
+
+ The official code and materials for this submission are available on GitHub:
+ [https://github.com/MatteoFasulo/clef2025-checkthat](https://github.com/MatteoFasulo/clef2025-checkthat)
+
+ ## Citation
+
+ If you find our work helpful or inspiring, please feel free to cite it:
+
+ ```bibtex
+ @misc{fasulo2025aiwizardscheckthat2025,
+   title={AI Wizards at CheckThat! 2025: Enhancing Transformer-Based Embeddings with Sentiment for Subjectivity Detection in News Articles},
+   author={Matteo Fasulo and Luca Babboni and Luca Tedeschini},
+   year={2025},
+   eprint={2507.11764},
+   archivePrefix={arXiv},
+   primaryClass={cs.CL},
+   url={https://arxiv.org/abs/2507.11764},
+ }
+ ```
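
The pipeline-based usage example returns whatever label names are stored in the checkpoint's configuration. To confirm how those names map to the OBJ/SUBJ classes referenced in the `AutoModelForSequenceClassification` snippet (whose comment indicates 0 = OBJ, 1 = SUBJ), here is a minimal sketch; the exact contents of `id2label` are an assumption to verify against the hosted `config.json`:

```python
from transformers import AutoConfig, pipeline

model_id = "MatteoFasulo/ModernBERT-base-subjectivity-english"

# id2label is read straight from the model's config.json on the Hub.
# Assumption: it maps 0 -> OBJ and 1 -> SUBJ, as the earlier snippet's comment suggests.
config = AutoConfig.from_pretrained(model_id)
print(config.id2label)

classifier = pipeline(
    "text-classification",
    model=model_id,
    tokenizer="answerdotai/ModernBERT-base",
)
# The 'label' field in the pipeline output uses the same mapping.
print(classifier("The new policy is an absolute disaster for the economy."))
```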
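
The card metadata also lists the `MatteoFasulo/clef2025_checkthat_task1_subjectivity` dataset. A minimal sketch for loading it with the `datasets` library, assuming the repository resolves under its default configuration (split and column names are inspected rather than assumed):

```python
from datasets import load_dataset

# Load the dataset referenced in the model card metadata (default config assumed).
ds = load_dataset("MatteoFasulo/clef2025_checkthat_task1_subjectivity")

print(ds)  # shows the available splits and their columns

# Peek at one raw example from the first available split.
first_split = next(iter(ds.values()))
print(first_split[0])
```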