Papers
arxiv:2510.00507

Graph2Eval: Automatic Multimodal Task Generation for Agents via Knowledge Graphs

Published on Oct 1
· Submitted by Yurun Chen on Oct 7
Abstract

Graph2Eval, a knowledge graph-based framework, generates multimodal and interactive tasks to comprehensively evaluate agents' reasoning, collaboration, and web interaction capabilities.

AI-generated summary

As multimodal LLM-driven agents continue to advance in autonomy and generalization, evaluation based on static datasets can no longer adequately assess their true capabilities in dynamic environments and diverse tasks. Existing LLM-based synthetic data methods are largely designed for LLM training and evaluation, and thus cannot be directly applied to agent tasks that require tool use and interactive capabilities. While recent studies have explored automatic agent task generation with LLMs, most efforts remain limited to text or image analysis, without systematically modeling multi-step interactions in web environments. To address these challenges, we propose Graph2Eval, a knowledge graph-based framework that automatically generates both multimodal document comprehension tasks and web interaction tasks, enabling comprehensive evaluation of agents' reasoning, collaboration, and interactive capabilities. In our approach, knowledge graphs constructed from multi-source external data serve as the task space, where we translate semantic relations into structured multimodal tasks using subgraph sampling, task templates, and meta-paths. A multi-stage filtering pipeline based on node reachability, LLM scoring, and similarity analysis is applied to guarantee the quality and executability of the generated tasks. Furthermore, Graph2Eval supports end-to-end evaluation of multiple agent types (Single-Agent, Multi-Agent, Web Agent) and measures reasoning, collaboration, and interaction capabilities. We instantiate the framework with Graph2Eval-Bench, a curated dataset of 1,319 tasks spanning document comprehension and web interaction scenarios. Experiments show that Graph2Eval efficiently generates tasks that differentiate agent and model performance, revealing gaps in reasoning, collaboration, and web interaction across different settings and offering a new perspective for agent evaluation.
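The core generation idea — treating a knowledge graph as the task space and turning meta-paths (sequences of relations) into templated questions, then filtering by node reachability — can be sketched as follows. This is a minimal toy illustration, not the authors' implementation: the triples, the `TEMPLATES` dictionary, and the 2-hop walk are all hypothetical stand-ins for the paper's richer multimodal templates and multi-stage filtering (which also includes LLM scoring and similarity analysis).

```python
# Toy knowledge graph as (head, relation, tail) triples, as might be
# extracted from multi-source documents. All names are illustrative.
TRIPLES = [
    ("PaperA", "cites", "PaperB"),
    ("PaperB", "introduces", "MethodX"),
    ("MethodX", "evaluated_on", "DatasetY"),
    ("PaperA", "authored_by", "Alice"),
]

# Hypothetical task templates keyed by a meta-path (a relation sequence).
TEMPLATES = {
    ("cites", "introduces"):
        "Which method is introduced by a paper cited by {start}?",
    ("introduces", "evaluated_on"):
        "On which dataset is the method introduced in {start} evaluated?",
}

def adjacency(triples):
    """Index triples as head -> [(relation, tail), ...]."""
    adj = {}
    for h, r, t in triples:
        adj.setdefault(h, []).append((r, t))
    return adj

def sample_tasks(triples, templates):
    """Enumerate 2-hop meta-paths and instantiate matching templates."""
    adj = adjacency(triples)
    tasks = []
    for start, edges in adj.items():
        for r1, mid in edges:
            for r2, end in adj.get(mid, []):
                tmpl = templates.get((r1, r2))
                if tmpl:
                    tasks.append({"question": tmpl.format(start=start),
                                  "answer": end})
    return tasks

def reachable(triples, src, dst):
    """Node-reachability filter: keep a task only if its answer node
    can be reached from the start node by following directed edges."""
    adj = adjacency(triples)
    seen, stack = set(), [src]
    while stack:
        node = stack.pop()
        if node == dst:
            return True
        if node in seen:
            continue
        seen.add(node)
        stack.extend(t for _, t in adj.get(node, []))
    return False

tasks = [t for t in sample_tasks(TRIPLES, TEMPLATES)]
for t in tasks:
    print(t)
```

On this toy graph the walk yields two tasks (one per matched meta-path), and the reachability check would discard any task whose answer node is disconnected from the question's start entity.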

Community

Paper submitter

In this work, we present Graph2Eval, an automatic task generation framework that leverages knowledge graphs as an intermediate representation. By systematically modeling entities and their relationships within documents and web data, the framework integrates multi-source data into a unified task space, enabling scalable task creation for evaluating agent capabilities across diverse scenarios. On top of this framework, we construct the Graph2Eval-Bench dataset. Experimental results show that Graph2Eval effectively generates tasks spanning a wide range of scenarios, and comparative studies across model scales and agent types confirm that these tasks reliably assess document comprehension and web interaction abilities under different settings.

Code Repository: https://github.com/YurunChen/Graph2Eval

