LEGION: Learning to Ground and Explain for Synthetic Image Detection
Abstract
The rapid advancement of generative technology is a double-edged sword: while it offers powerful tools that enhance convenience, it also raises significant social concerns. On the defense side, current synthetic image detection methods often lack artifact-level textual interpretability and are overly focused on image manipulation detection, while existing datasets usually suffer from outdated generators and a lack of fine-grained annotations. In this paper, we introduce SynthScars, a high-quality and diverse dataset consisting of 12,236 fully synthetic images with human-expert annotations. It features 4 distinct image content types, 3 categories of artifacts, and fine-grained annotations covering pixel-level segmentation, detailed textual explanations, and artifact category labels. Furthermore, we propose LEGION (LEarning to Ground and explain for Synthetic Image detectiON), a multimodal large language model (MLLM)-based image forgery analysis framework that integrates artifact detection, segmentation, and explanation. Building upon this capability, we further explore LEGION as a controller, integrating it into image refinement pipelines to guide the generation of higher-quality and more realistic images. Extensive experiments show that LEGION outperforms existing methods across multiple benchmarks, particularly surpassing the second-best traditional expert on SynthScars by 3.31% in mIoU and 7.75% in F1 score. Moreover, the refined images generated under its guidance exhibit stronger alignment with human preferences. The code, model, and dataset will be released.
Community
We explore the fully synthetic forgery analysis task and introduce SynthScars, a challenging and finely annotated forged image dataset. Additionally, we propose LEGION, a framework supporting three subtasks—artifact localization, explanation generation, and forgery detection. LEGION's detailed feedback further guides image regeneration and inpainting pipelines, promoting higher-quality and more realistic image generation.
Our main contributions are as follows:
- We introduce SynthScars, a challenging dataset for synthetic image detection, featuring high-quality synthetic images with diverse content types, as well as fine-grained pixel-level artifact annotations with detailed textual explanations.
- We propose LEGION, a comprehensive image forgery analysis framework for artifact localization, explanation generation, and forgery detection, which effectively aids human experts to detect and understand image forgeries.
- Extensive experiments demonstrate that LEGION achieves exceptional performance on 4 challenging benchmarks. Comparisons with 19 existing methods show that it achieves state-of-the-art performance on the vast majority of metrics, exhibiting strong robustness and generalization ability.
- We position LEGION not only as a defender against ever-evolving generative technologies but also as a controller that guides higher-quality and more realistic image generation. Qualitative and quantitative experiments on image regeneration and inpainting show the great value of LEGION in providing feedback for progressive artifact refinement.
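To make the dataset's annotation design concrete, the sketch below shows roughly what a single SynthScars-style record could look like, combining the three annotation layers the paper describes (pixel-level segmentation, a textual explanation, and an artifact category label). This is a hypothetical illustration: the field names, the polygon encoding, and the specific category string are assumptions, not the dataset's published schema.

```python
# Hypothetical sketch of a SynthScars-style annotation record.
# Field names, the polygon encoding, and the category value are
# illustrative assumptions, not the dataset's actual schema.
annotation = {
    "image_id": "synthscars_000123",   # assumed identifier format
    "content_type": "human",           # one of the 4 content types
    "artifacts": [
        {
            # one of the 3 artifact categories (value assumed here)
            "category": "physics",
            # pixel-level region as a polygon of (x, y) vertices
            "polygon": [[120, 45], [133, 47], [130, 60]],
            # detailed textual explanation of the artifact
            "explanation": (
                "The shadow direction is inconsistent with the "
                "scene's light source."
            ),
        }
    ],
}


def count_artifacts(record):
    """Count the annotated artifact regions in one record."""
    return len(record["artifacts"])


print(count_artifacts(annotation))  # 1
```

A structure along these lines would let the three subtasks share one record: the polygon supervises localization, the explanation supervises generation, and the presence or absence of artifact entries supervises detection.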
This is an automated message from the Librarian Bot. The following papers, similar to this one, were recommended by the Semantic Scholar API:
- Towards General Visual-Linguistic Face Forgery Detection (V2) (2025)
- GroundingSuite: Measuring Complex Multi-Granular Pixel Grounding (2025)
- AnomalyPainter: Vision-Language-Diffusion Synergy for Zero-Shot Realistic and Diverse Industrial Anomaly Synthesis (2025)
- SegAgent: Exploring Pixel Understanding Capabilities in MLLMs by Imitating Human Annotator Trajectories (2025)
- Evaluating and Predicting Distorted Human Body Parts for Generated Images (2025)
- MF-VITON: High-Fidelity Mask-Free Virtual Try-On with Minimal Input (2025)
- REAL: Realism Evaluation of Text-to-Image Generation Models for Effective Data Augmentation (2025)