A Unified Agentic Framework for Evaluating Conditional Image Generation
Abstract
Conditional image generation has gained significant attention for its ability to personalize content. However, the field faces challenges in developing task-agnostic, reliable, and explainable evaluation metrics. This paper introduces CIGEval, a unified agentic framework for comprehensive evaluation of conditional image generation tasks. CIGEval utilizes large multimodal models (LMMs) as its core, integrating a multi-functional toolbox and establishing a fine-grained evaluation framework. Additionally, we synthesize evaluation trajectories for fine-tuning, empowering smaller LMMs to autonomously select appropriate tools and conduct nuanced analyses based on tool outputs. Experiments across seven prominent conditional image generation tasks demonstrate that CIGEval (GPT-4o version) achieves a high correlation of 0.4625 with human assessments, closely matching the inter-annotator correlation of 0.47. Moreover, when implemented with 7B open-source LMMs using only 2.3K training trajectories, CIGEval surpasses the previous GPT-4o-based state-of-the-art method. Case studies on GPT-4o image generation highlight CIGEval's capability in identifying subtle issues related to subject consistency and adherence to control guidance, indicating its great potential for automating evaluation of image generation tasks with human-level reliability.
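The abstract describes the agentic loop only at a high level: an LMM core selects tools from a toolbox, inspects their outputs, and then issues a fine-grained judgment. The sketch below is one plausible reading of that loop, not the authors' implementation; all identifiers (`EvalState`, `call_lmm`, `TOOLBOX`, the tool names) are hypothetical placeholders.

```python
# Minimal sketch of an agentic evaluation loop in the spirit of CIGEval.
# Every name here is an illustrative assumption, not the paper's actual API.

from dataclasses import dataclass, field

@dataclass
class EvalState:
    instruction: str                      # the generation condition (prompt, subject, mask, ...)
    images: dict                          # e.g. {"condition": ..., "generated": ...}
    observations: list = field(default_factory=list)

def call_lmm(prompt: str, images: dict) -> str:
    """Placeholder for a call to a large multimodal model (e.g. GPT-4o or a 7B open-source LMM)."""
    raise NotImplementedError("plug in your own LMM client here")

# Hypothetical toolbox: each tool returns a textual observation the agent can reason over.
TOOLBOX = {
    "grounding": lambda state: "bounding boxes of the referenced subject",
    "highlight_difference": lambda state: "regions that changed between source and generated image",
    "crop_and_zoom": lambda state: "zoomed-in crop of the region under question",
}

def evaluate(state: EvalState, max_steps: int = 3) -> float:
    """Agentic evaluation: the LMM picks tools, reads their outputs, then scores on a 0-1 scale."""
    for _ in range(max_steps):
        decision = call_lmm(
            f"Task: {state.instruction}\n"
            f"Observations so far: {state.observations}\n"
            f"Available tools: {list(TOOLBOX)}\n"
            "Reply with a tool name, or 'finish' if you are ready to score.",
            state.images,
        ).strip()
        if decision == "finish" or decision not in TOOLBOX:
            break
        state.observations.append(TOOLBOX[decision](state))
    verdict = call_lmm(
        f"Given the observations {state.observations}, rate how well the generated image "
        "satisfies the condition on a 0-1 scale. Reply with a number only.",
        state.images,
    )
    return float(verdict)
```

Logging such runs (tool choices, observations, and final verdicts) yields exactly the kind of evaluation trajectories the abstract mentions: the paper reports that roughly 2.3K of them suffice to fine-tune 7B LMMs past the previous GPT-4o-based state of the art.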
Community
Introducing CIGEval, a unified framework for comprehensive evaluation of conditional image generation tasks using large multimodal models (LMMs).
- Achieves human-level reliability in automating image generation evaluations.
- Surpasses previous GPT-4o-based methods and shows strong potential for identifying subtle image issues.
- Highlights the strengths and weaknesses of GPT-4o image generation in multi-image tasks and adherence to control guidance.
Check it out: Data & Code
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- MIGE: A Unified Framework for Multimodal Instruction-Based Image Generation and Editing (2025)
- GRADEO: Towards Human-Like Evaluation for Text-to-Video Generation via Multi-Step Reasoning (2025)
- MME-Unify: A Comprehensive Benchmark for Unified Multimodal Understanding and Generation Models (2025)
- UVE: Are MLLMs Unified Evaluators for AI-Generated Videos? (2025)
- Multimodal Representation Alignment for Image Generation: Text-Image Interleaved Control Is Easier Than You Think (2025)
- Envisioning Beyond the Pixels: Benchmarking Reasoning-Informed Visual Editing (2025)
- T2I-FineEval: Fine-Grained Compositional Metric for Text-to-Image Evaluation (2025)