# MARBERT Sarcasm Detector
This model is a fine-tuned version of UBC-NLP/MARBERTv2 on the ArSarcasT corpus training set. It achieves the following results on the evaluation sets:
| Eval Dataset | Accuracy | F1 | Precision | Recall |
|---|---|---|---|---|
| ArSarcasT | 0.844 | 0.735 | 0.754 | 0.718 |
| iSarcasmEVAL | 0.892 | 0.633 | 0.616 | 0.650 |
| ArSarcasmV2 | 0.771 | 0.561 | 0.590 | 0.534 |
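The reported F1 scores are consistent (to rounding) with the precision and recall columns, since F1 is the harmonic mean of the two. A quick sanity check in Python:

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# (precision, recall) pairs taken from the evaluation table above
results = {
    "ArSarcasT": (0.754, 0.718),
    "iSarcasmEVAL": (0.616, 0.650),
    "ArSarcasmV2": (0.590, 0.534),
}

for name, (p, r) in results.items():
    print(f"{name}: F1 = {f1_score(p, r):.3f}")
```

Each computed value agrees with the table's F1 column to within rounding of the reported three decimal places.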
## Model description
MARBERTv2 fine-tuned on a dataset of sarcastic tweets for sarcasm-detection text classification.
## Intended uses & limitations
More information needed
## Training and evaluation data
- Training dataset: ArSarcasT development split.
- Evaluation datasets:
  - ArSarcasm-v2 test split.
  - iSarcasmEVAL test split.
  - ArSarcasT test split.
## Training procedure
The model was fine-tuned for 3 epochs.
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: not recorded
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.28.1
- TensorFlow 2.12.0
- Tokenizers 0.13.3
## Paper Citation
If you use this fine-tuned model, which is based on the original MARBERT model, please cite the following paper:

> Galal, M. A., Yousef, A. H., Zayed, H. H., & Medhat, W. (2024). Arabic sarcasm detection: An enhanced fine-tuned language model approach. *Ain Shams Engineering Journal*, 15(6), 102736. https://doi.org/10.1016/j.asej.2024.102736
## Dataset Repository