Graph2Eval: Automatic Multimodal Task Generation for Agents via Knowledge Graphs
Abstract
Graph2Eval, a knowledge graph-based framework, generates multimodal and interactive tasks to comprehensively evaluate agents' reasoning, collaboration, and web interaction capabilities.
As multimodal LLM-driven agents continue to advance in autonomy and generalization, evaluation based on static datasets can no longer adequately assess their true capabilities in dynamic environments and across diverse tasks. Existing LLM-based synthetic data methods are largely designed for LLM training and evaluation and thus cannot be directly applied to agent tasks that require tool use and interaction. While recent studies have explored automatic agent task generation with LLMs, most efforts remain limited to text or image analysis and do not systematically model multi-step interactions in web environments. To address these challenges, we propose Graph2Eval, a knowledge graph-based framework that automatically generates both multimodal document comprehension tasks and web interaction tasks, enabling comprehensive evaluation of agents' reasoning, collaboration, and interaction capabilities. In our approach, knowledge graphs constructed from multi-source external data serve as the task space, and we translate semantic relations into structured multimodal tasks using subgraph sampling, task templates, and meta-paths. A multi-stage filtering pipeline based on node reachability, LLM scoring, and similarity analysis ensures the quality and executability of the generated tasks. Furthermore, Graph2Eval supports end-to-end evaluation of multiple agent types (Single-Agent, Multi-Agent, Web Agent) across these three capability dimensions. We instantiate the framework with Graph2Eval-Bench, a curated dataset of 1,319 tasks spanning document comprehension and web interaction scenarios. Experiments show that Graph2Eval efficiently generates tasks that differentiate agent and model performance, revealing gaps in reasoning, collaboration, and web interaction across different settings and offering a new perspective on agent evaluation.
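For concreteness, the snippet below is a minimal sketch of the generation step the abstract describes: sample a subgraph around a seed node, then turn a templated relation into a task prompt. The `TASK_TEMPLATES` dictionary, relation names, and the `sample_task` helper are illustrative assumptions, not the paper's actual API.

```python
# Minimal sketch of KG-driven task generation: sample a subgraph,
# then instantiate a task template from one of its relations.
import random
import networkx as nx

# Hypothetical templates keyed by the relation an edge expresses.
TASK_TEMPLATES = {
    "describes": "According to the document, how does '{src}' describe '{dst}'?",
    "links_to": "Starting from the page '{src}', navigate to '{dst}' and report its title.",
}

def sample_task(kg: nx.DiGraph, hops: int = 2):
    """Sample a small subgraph around a random seed and fill a template."""
    seed = random.choice(list(kg.nodes))
    reachable = nx.single_source_shortest_path_length(kg, seed, cutoff=hops)
    sub = kg.subgraph(reachable)
    for src, dst, data in sub.edges(data=True):
        template = TASK_TEMPLATES.get(data.get("relation", ""))
        if template:
            return {
                "prompt": template.format(src=src, dst=dst),
                "subgraph": list(sub.edges),  # kept so answers stay grounded in the graph
            }
    return None  # no templated relation in this sample; the caller resamples

# Toy example: a two-node graph yields one document-comprehension task.
kg = nx.DiGraph()
kg.add_edge("Figure 3", "the ablation results", relation="describes")
print(sample_task(kg))
```

A real pipeline would also attach gold answers and multimodal context (page screenshots, document snippets) to each task; the sketch only shows how graph relations become prompts.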
Community
In this work, we presented Graph2Eval, an automatic task generation framework that leverages knowledge graphs as an intermediate representation. By systematically modeling entities and their relationships within documents and web data, this framework integrates multi-source data into a unified task space, enabling scalable task creation to evaluate agent capabilities across diverse scenarios. Based on this framework, we constructed the Graph2Eval-Bench dataset. Experimental results demonstrate that Graph2Eval effectively generates tasks spanning a wide range of scenarios, and comparative studies on models of varying scales and agent types confirm that these tasks reliably assess document comprehension and web interaction abilities under different settings.
Code Repository: https://github.com/YurunChen/Graph2Eval
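As a companion sketch, the filter below mirrors the multi-stage pipeline the abstract mentions (node reachability, LLM scoring, similarity analysis). The thresholds, the `llm_score` callable, and the use of difflib for similarity are assumptions made for illustration; the repository's actual pipeline may differ.

```python
# Hedged sketch of the three-stage task filter described in the abstract.
from difflib import SequenceMatcher
import networkx as nx

def filter_tasks(tasks, kg: nx.DiGraph, llm_score, min_score=0.7, max_sim=0.9):
    """Keep only tasks that survive all three filtering stages."""
    kept = []
    for task in tasks:
        # Stage 1, node reachability: every referenced edge must connect
        # nodes that exist and are reachable in the graph.
        if not all(kg.has_node(s) and kg.has_node(d) and nx.has_path(kg, s, d)
                   for s, d in task["subgraph"]):
            continue
        # Stage 2, LLM scoring: a judge model rates clarity/answerability in [0, 1].
        if llm_score(task["prompt"]) < min_score:
            continue
        # Stage 3, similarity analysis: drop near-duplicates of kept prompts.
        if any(SequenceMatcher(None, task["prompt"], k["prompt"]).ratio() > max_sim
               for k in kept):
            continue
        kept.append(task)
    return kept
```

A production pipeline would likely use embedding cosine similarity rather than difflib, but the staging order, cheap structural checks first and LLM judging second, is the point of the sketch.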
This is an automated message from the Librarian Bot. I found the following papers similar to this paper, recommended by the Semantic Scholar API:
- DSRAG: A Domain-Specific Retrieval Framework Based on Document-derived Multimodal Knowledge Graph (2025)
- DeepMEL: A Multi-Agent Collaboration Framework for Multimodal Entity Linking (2025)
- Think-on-Graph 3.0: Efficient and Adaptive LLM Reasoning on Heterogeneous Graphs via Multi-Agent Dual-Evolving Context Retrieval (2025)
- KG-RAG: Enhancing GUI Agent Decision-Making via Knowledge Graph-Driven Retrieval-Augmented Generation (2025)
- G-reasoner: Foundation Models for Unified Reasoning over Graph-structured Knowledge (2025)
- OIG-Bench: A Multi-Agent Annotated Benchmark for Multimodal One-Image Guides Understanding (2025)
- MoLoRAG: Bootstrapping Document Understanding via Multi-modal Logic-aware Retrieval (2025)