Robust Adaptation of Large Multimodal Models for Retrieval Augmented Hateful Meme Detection
This repository contains a PEFT fine-tuned Large Multimodal Model (LMM) for hateful meme detection, as presented in the paper Robust Adaptation of Large Multimodal Models for Retrieval Augmented Hateful Meme Detection.
Model Details
Model Description
Hateful memes have become a significant concern on the Internet, necessitating robust automated detection systems. While Large Multimodal Models (LMMs) have shown promise in hateful meme detection, they face notable challenges such as sub-optimal performance and limited out-of-domain generalization. Recent studies further reveal the limitations of both supervised fine-tuning (SFT) and in-context learning when applied to LMMs in this setting. To address these issues, this work proposes a robust adaptation framework for hateful meme detection that enhances in-domain accuracy and cross-domain generalization while preserving the general vision-language capabilities of LMMs. Experiments on six meme classification datasets show that this approach achieves state-of-the-art performance, outperforming larger agentic systems. Analysis further reveals improved robustness under adversarial attacks compared to SFT models. Moreover, the method generates higher-quality rationales for explaining hateful content than standard SFT, enhancing model interpretability.
- Developed by: Jingbiao Mei, Jinghong Chen, Guangyu Yang, Weizhe Lin, Bill Byrne
- Model type: Large Multimodal Model (LMM), fine-tuned using PEFT (LoRA) for hateful meme detection.
- Language(s) (NLP): English
- License: cc-by-4.0
- Finetuned from model: Qwen/Qwen2-VL-7B-Instruct
Uses
Direct Use
This model is intended for the robust detection of hateful memes. It can be used to classify multimodal content (image and text) for hate speech, offering improved accuracy and cross-domain generalization. It also provides rationales for its classifications, aiding interpretability.
Out-of-Scope Use
This model should not be used for generating hateful content, propagating misinformation, or any other malicious purposes. It is a detection tool and its application should align with ethical AI principles for combating harmful online content.
Bias, Risks, and Limitations
All AI models, especially those dealing with sensitive content like hate speech, may exhibit biases from their training data or limitations in understanding complex nuances, sarcasm, or evolving slang. This could lead to misclassifications or biased explanations.
Recommendations
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. Continuous monitoring, human oversight in critical applications, and further evaluation on diverse and evolving datasets are recommended.
How to Get Started with the Model
This model is a PEFT (Parameter-Efficient Fine-Tuning) adapter built on top of Qwen/Qwen2-VL-7B-Instruct. To use it, load the base model and its processor with the Hugging Face Transformers library, then attach this adapter with the peft library.
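A minimal sketch of this workflow is shown below. The `<this-repo-id>` placeholder stands for this repository's Hugging Face ID, and the classification prompt is an illustrative assumption, as the exact prompt used by the authors is not specified in this card.

```python
import torch
from PIL import Image
from transformers import Qwen2VLForConditionalGeneration, AutoProcessor
from peft import PeftModel

# Load the base model and its processor.
base_model = Qwen2VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2-VL-7B-Instruct",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-7B-Instruct")

# Attach this repository's LoRA adapter on top of the base model.
# "<this-repo-id>" is a placeholder for this repository's Hugging Face ID.
model = PeftModel.from_pretrained(base_model, "<this-repo-id>")
model.eval()

# Illustrative inference on a single meme; the prompt wording is an assumption.
image = Image.open("meme.png")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "Is this meme hateful? Answer yes or no and explain briefly."},
        ],
    }
]
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = processor(text=[text], images=[image], return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=128)
# Decode only the newly generated tokens, dropping the prompt.
print(processor.batch_decode(output_ids[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True)[0])
```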
Training Details
Training Data
The model was trained and evaluated on six meme classification datasets, as mentioned in the paper.
Training Procedure
The paper proposes a robust adaptation framework that enhances in-domain accuracy and cross-domain generalization while preserving the general vision-language capabilities of LMMs. Training used parameter-efficient fine-tuning with PEFT (LoRA); an illustrative adapter configuration is sketched after the hyperparameters below.
Training Hyperparameters
- Training regime: Mixed-precision fine-tuning (e.g., bf16), in line with the base Qwen2-VL-7B-Instruct model.
- Epochs: 3
- Train Batch Size: 16
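For illustration, a LoRA adapter for this setup might be configured as follows with the peft library. The rank, alpha, dropout, and target modules shown are assumptions; the card does not state the exact adapter configuration used for this checkpoint.

```python
from peft import LoraConfig, get_peft_model
from transformers import Qwen2VLForConditionalGeneration

base = Qwen2VLForConditionalGeneration.from_pretrained("Qwen/Qwen2-VL-7B-Instruct")

# Illustrative only: the actual rank, alpha, dropout, and target modules
# used for this checkpoint are not stated in this card.
lora_config = LoraConfig(
    r=16,                      # assumed rank
    lora_alpha=32,             # assumed scaling factor
    lora_dropout=0.05,         # assumed dropout
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
```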
Evaluation
Testing Data, Factors & Metrics
Testing Data
The model was evaluated on six meme classification datasets.
Metrics
The evaluation metrics recorded during training include:
- accuracy
- auroc (Area Under the Receiver Operating Characteristic curve)
- f1 (F1 Score)
- precision
- recall
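These are the standard binary-classification metrics. A minimal sketch of how they can be computed with scikit-learn is shown below; the `evaluate` helper and the 0.5 decision threshold are illustrative assumptions, not from the paper.

```python
from sklearn.metrics import (
    accuracy_score, roc_auc_score, f1_score, precision_score, recall_score
)

def evaluate(y_true, y_prob, threshold=0.5):
    """y_true: gold labels (0/1); y_prob: predicted probability of 'hateful'."""
    y_pred = [int(p >= threshold) for p in y_prob]
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "auroc": roc_auc_score(y_true, y_prob),  # uses scores, not hard labels
        "f1": f1_score(y_true, y_pred),
        "precision": precision_score(y_true, y_pred),
        "recall": recall_score(y_true, y_pred),
    }
```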
Results
The proposed approach achieved state-of-the-art performance across six meme classification datasets, outperforming larger agentic systems. It also demonstrated improved robustness under adversarial attacks and generated higher-quality rationales compared to standard SFT models.
Citation
If you find this work helpful, please consider citing the following papers:
@inproceedings{RGCL2024Mei,
title = "Improving Hateful Meme Detection through Retrieval-Guided Contrastive Learning",
author = "Mei, Jingbiao and
Chen, Jinghong and
Lin, Weizhe and
Byrne, Bill and
Tomalin, Marcus",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.291",
doi = "10.18653/v1/2024.acl-long.291",
pages = "5333--5347"
}
@article{RAHMD2025Mei,
    title = {Robust Adaptation of Large Multimodal Models for Retrieval Augmented Hateful Meme Detection},
    author = {Mei, Jingbiao and Chen, Jinghong and Yang, Guangyu and Lin, Weizhe and Byrne, Bill},
    year = {2025},
    month = may,
    url = {http://arxiv.org/abs/2502.13061},
    doi = {10.48550/arXiv.2502.13061},
    number = {arXiv:2502.13061},
    note = {arXiv:2502.13061 [cs]},
    publisher = {arXiv}
}
Framework versions
- PEFT 0.12.0