
# PropagandaDetection

The model is a Transformer network based on pre-trained DistilBERT, fine-tuned on the SemEval 2023 Task 3 training data for the propaganda detection task.
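
A minimal usage sketch with the `transformers` pipeline is shown below. The repository id is a placeholder (replace it with the actual path of this model on the Hub), and a sequence-classification head is assumed.

```python
from transformers import pipeline

# Placeholder repository id; substitute the actual Hub path of this model.
classifier = pipeline(
    "text-classification",
    model="<user-or-org>/PropagandaDetection",
)

# Classify a single sentence; the output is a list of {label, score} dicts.
print(classifier("They will stop at nothing to destroy our way of life!"))
```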

## Hyperparameters

- Batch size: 16
- Learning rate: 2e-5
- Optimizer: AdamW
- Epochs: 4

A fine-tuning sketch using these values is given below.
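
The following is a sketch of the fine-tuning setup with the Hugging Face `Trainer`, not the exact training script. The base checkpoint name, the binary label set, and the prepared `train_dataset`/`eval_dataset` objects are assumptions; the SemEval 2023 Task 3 data must be obtained and tokenized separately.

```python
from transformers import (
    AutoTokenizer,
    AutoModelForSequenceClassification,
    TrainingArguments,
    Trainer,
)

# Assumed base checkpoint and a binary propaganda / non-propaganda label set.
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2
)

args = TrainingArguments(
    output_dir="propaganda-detection",
    per_device_train_batch_size=16,  # batch size = 16
    learning_rate=2e-5,              # learning rate = 2e-5
    num_train_epochs=4,              # epochs = 4
    optim="adamw_torch",             # AdamW optimizer
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,     # assumed: tokenized SemEval 2023 Task 3 training split
    eval_dataset=eval_dataset,       # assumed: tokenized validation split
    tokenizer=tokenizer,
)
trainer.train()
```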

Accuracy: 90% on the SemEval 2023 Task 3 test set.

## References

```bibtex
@inproceedings{bangerter2023unisa,
  title={Unisa at SemEval-2023 Task 3: A SHAP-based method for propaganda detection},
  author={Bangerter, Micaela and Fenza, Giuseppe and Gallo, Mariacristina and Loia, Vincenzo and Volpe, Alberto and De Maio, Carmen and Stanzione, Claudio},
  booktitle={Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)},
  pages={885--891},
  year={2023}
}
```