Deceptive Humor: A Synthetic Multilingual Benchmark Dataset for Bridging Fabricated Claims with Humorous Content
Abstract
This paper presents the Deceptive Humor Dataset (DHD), a novel resource for studying humor derived from fabricated claims and misinformation. In an era of rampant misinformation, understanding how humor intertwines with deception is essential. DHD consists of humor-infused comments generated from false narratives, incorporating fabricated claims and manipulated information, using the ChatGPT-4o model. Each instance is labeled with a Satire Level, ranging from 1 (subtle satire) to 3 (high-level satire), and classified into one of five Humor Categories: Dark Humor, Irony, Social Commentary, Wordplay, and Absurdity. The dataset spans multiple languages, including English, Telugu, Hindi, Kannada, and Tamil, together with their code-mixed variants (Te-En, Hi-En, Ka-En, Ta-En), making it a valuable multilingual benchmark. By introducing DHD, we establish a structured foundation for analyzing humor in deceptive contexts, paving the way for a new research direction that explores how humor not only interacts with misinformation but also influences its perception and spread. We also establish strong baselines on the proposed dataset, giving future work a reference point for benchmarking and advancing deceptive humor detection models.
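To make the annotation scheme concrete, here is a minimal sketch of what a single DHD instance might look like. The field names and the example comment are illustrative assumptions based on the labels described above, not the released schema.

```python
# Hypothetical shape of one DHD instance, inferred from the label scheme in
# the abstract. Field names and the example text are assumptions, not the
# released schema.

HUMOR_CATEGORIES = ["Dark Humor", "Irony", "Social Commentary",
                    "Wordplay", "Absurdity"]
LANGUAGES = ["En", "Te", "Hi", "Ka", "Ta",        # monolingual
             "Te-En", "Hi-En", "Ka-En", "Ta-En"]  # code-mixed variants

example = {
    # Humor-infused comment generated from a fabricated claim (made up here).
    "comment": "Of course the new towers control the weather -- "
               "finally an explanation for my ruined weekends.",
    "language": "En",
    "satire_level": 2,          # integer in {1, 2, 3}: 1 = subtle, 3 = high-level
    "humor_category": "Irony",  # one of HUMOR_CATEGORIES
}

# Basic sanity checks on the assumed label space.
assert example["language"] in LANGUAGES
assert example["satire_level"] in {1, 2, 3}
assert example["humor_category"] in HUMOR_CATEGORIES
```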
Community
This work introduces Deceptive Humor as a novel research problem at the largely unexplored intersection of humor and misinformation. Unlike standard humor detection or fact-checking tasks, deceptive humor blends fabricated claims with comedic elements, making it harder to detect and potentially more influential in spreading misinformation. Detection is particularly difficult because models need both humor understanding and fact-checking capabilities. Additionally, we generate data from recently trending fabricated claims, further increasing the challenge for existing models. To advance research in this direction, we present DHD, a multilingual benchmark for deceptive humor that provides a structured foundation for studying this complex phenomenon and for improving fact-aware humor detection in NLP.
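As one concrete starting point, the sketch below fine-tunes a multilingual encoder on the humor-category task. The paper's actual baselines are not detailed in this excerpt, so the model choice (`xlm-roberta-base`), column names, toy data, and hyperparameters are all assumptions for illustration.

```python
# Minimal, self-contained baseline sketch for DHD humor-category
# classification. Everything below (model, columns, toy data,
# hyperparameters) is an assumption for illustration; swap in the released
# dataset and the paper's settings to reproduce its baselines.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          DataCollatorWithPadding, Trainer, TrainingArguments)

MODEL = "xlm-roberta-base"  # covers English, the four Indic languages,
                            # and (imperfectly) their code-mixed variants
CATEGORIES = ["Dark Humor", "Irony", "Social Commentary",
              "Wordplay", "Absurdity"]

# Tiny stand-in for DHD so the script runs end to end.
toy = Dataset.from_dict({
    "comment": ["Sure, and the moon landing was shot in my backyard.",
                "Nothing says 'wellness' like a miracle-cure chain message."],
    "label": [CATEGORIES.index("Irony"), CATEGORIES.index("Dark Humor")],
})

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL, num_labels=len(CATEGORIES))

def tokenize(batch):
    return tokenizer(batch["comment"], truncation=True, max_length=128)

train_ds = toy.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="dhd-baseline",
                           per_device_train_batch_size=2,
                           num_train_epochs=1,
                           logging_steps=1),
    train_dataset=train_ds,
    data_collator=DataCollatorWithPadding(tokenizer),  # pad per batch
)
trainer.train()
```

Satire-level prediction would follow the same pattern with `num_labels=3`, or could be framed as ordinal regression given the ordered 1-3 scale.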
This is an automated message from the Librarian Bot. I found the following papers similar to this paper, recommended by the Semantic Scholar API:
- Irony Detection, Reasoning and Understanding in Zero-shot Learning (2025)
- How to Protect Yourself from 5G Radiation? Investigating LLM Responses to Implicit Misinformation (2025)
- Worse than Zero-shot? A Fact-Checking Dataset for Evaluating the Robustness of RAG Against Misleading Retrievals (2025)
- FanChuan: A Multilingual and Graph-Structured Benchmark For Parody Detection and Analysis (2025)
- Evaluation of Hate Speech Detection Using Large Language Models and Geographical Contextualization (2025)
- Reasoning About Persuasion: Can LLMs Enable Explainable Propaganda Detection? (2025)
- Detecting Offensive Memes with Social Biases in Singapore Context Using Multimodal Large Language Models (2025)