arxiv:2509.21710

Think-on-Graph 3.0: Efficient and Adaptive LLM Reasoning on Heterogeneous Graphs via Multi-Agent Dual-Evolving Context Retrieval

Published on Sep 26
· Submitted by wuxiaojun on Sep 29
Abstract

ToG-3, a novel framework, enhances LLMs with external knowledge using a dynamic, multi-agent system that evolves queries and subgraphs for precise evidence retrieval and reasoning.

AI-generated summary

Retrieval-Augmented Generation (RAG) and Graph-based RAG have become important paradigms for enhancing Large Language Models (LLMs) with external knowledge. However, existing approaches face a fundamental trade-off. Graph-based methods are inherently dependent on high-quality graph structures, yet they face significant practical constraints: manually constructed knowledge graphs are prohibitively expensive to scale, while graphs automatically extracted from corpora are limited by the performance of the underlying LLM extractors, especially when using smaller, locally deployed models. This paper presents Think-on-Graph 3.0 (ToG-3), a novel framework that introduces a Multi-Agent Context Evolution and Retrieval (MACER) mechanism to overcome these limitations. Our core innovation is the dynamic construction and refinement of a Chunk-Triplets-Community heterogeneous graph index, which incorporates a dual-evolution mechanism of Evolving Query and Evolving Sub-Graph for precise evidence retrieval. This approach addresses a critical limitation of prior Graph-based RAG methods, which typically construct a static graph index in a single pass without adapting to the actual query. A multi-agent system, comprising Constructor, Retriever, Reflector, and Responser agents, collaboratively engages in an iterative process of evidence retrieval, answer generation, sufficiency reflection, and, crucially, query and subgraph evolution. This dual-evolving multi-agent system allows ToG-3 to adaptively build a targeted graph index during reasoning, mitigating the inherent drawbacks of static, one-time graph construction and enabling deep, precise reasoning even with lightweight LLMs. Extensive experiments demonstrate that ToG-3 outperforms the compared baselines on both deep and broad reasoning benchmarks, and ablation studies confirm the efficacy of the components of the MACER framework.
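
To make the MACER loop described in the abstract more concrete, here is a minimal Python sketch of one possible control flow: a Constructor expands the heterogeneous index for the current query, a Retriever gathers evidence, a Responser drafts an answer, and a Reflector decides whether to stop or to evolve the query. All class and method names below (`HeterogeneousIndex`, `expand`, `retrieve`, `generate`, `reflect`) are illustrative assumptions, not the authors' implementation or API.

```python
from dataclasses import dataclass, field

# Illustrative sketch of the MACER loop (retrieve -> answer -> reflect -> evolve).
# The agent objects and their methods are hypothetical stand-ins, not ToG-3's code.

@dataclass
class HeterogeneousIndex:
    """Chunk-Triplets-Community graph index, grown incrementally per query."""
    chunks: list = field(default_factory=list)       # raw text passages
    triplets: list = field(default_factory=list)     # (head, relation, tail) edges
    communities: list = field(default_factory=list)  # clustered entity summaries

def macer_answer(question, corpus, constructor, retriever, reflector, responser,
                 max_rounds=5):
    """Run the dual-evolving loop until the Reflector judges the evidence sufficient."""
    query = question
    index = HeterogeneousIndex()
    answer = None
    for _ in range(max_rounds):
        # Constructor: extend the graph index with material relevant to the
        # current (possibly evolved) query instead of indexing everything up front.
        constructor.expand(index, corpus, query)

        # Retriever: collect evidence from chunks, triplets, and communities.
        evidence = retriever.retrieve(index, query)

        # Responser: draft an answer from the evidence gathered so far.
        answer = responser.generate(question, evidence)

        # Reflector: judge sufficiency; if sufficient, stop, otherwise produce an
        # evolved query that drives the next round of construction and retrieval.
        verdict = reflector.reflect(question, answer, evidence)
        if verdict.sufficient:
            break
        query = verdict.evolved_query
    return answer
```

In the paper, each of these roles is played by an LLM-driven agent; the sketch only shows how the evolving query and subgraph feed back into the next iteration rather than being fixed by a one-time index build.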

Community

Paper author Paper submitter

This paper presents Think-on-Graph 3.0 (ToG-3), a novel framework that introduces a Multi-Agent Context Evolution and Retrieval (MACER) mechanism, which incorporates a Chunk-Triplets-Community heterogeneous graph and a dual-evolution mechanism of Evolving Query and Evolving Sub-Graph for precise evidence retrieval.
This approach addresses a critical limitation of prior Graph-based RAG methods, which typically construct a static graph index in a single pass without adapting to the actual query.
Extensive experiments demonstrate that ToG-3 outperforms the compared baselines on both deep and broad reasoning benchmarks, and ablation studies confirm the efficacy of the components of the MACER framework.

Paper author Paper submitter

Visualization example of the Chunk-Triplets-Community heterogeneous graph via Neo4j Community Edition:
[Image: graph_of_broad_knowledge_reasoning_task]
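
For readers who want to reproduce a view like this locally, the sketch below loads a toy Chunk-Triplets-Community graph into a Neo4j instance using the official Python driver; the node labels, relationship types, and connection settings are assumptions chosen to mirror the index described in the abstract, not the schema used in the paper's code.

```python
from neo4j import GraphDatabase  # pip install neo4j

# Connection settings for a local Neo4j Community Edition instance (adjust as needed).
URI = "bolt://localhost:7687"
AUTH = ("neo4j", "password")

def load_toy_graph(tx):
    # One chunk, one extracted triplet, and one community. The labels
    # (Chunk, Entity, Community) and relationship types are illustrative only.
    tx.run(
        """
        MERGE (c:Chunk {id: 'chunk-1', text: 'Example passage about graph-based RAG.'})
        MERGE (h:Entity {name: 'ToG-3'})
        MERGE (t:Entity {name: 'heterogeneous graph index'})
        MERGE (h)-[:RELATION {predicate: 'builds'}]->(t)
        MERGE (c)-[:MENTIONS]->(h)
        MERGE (c)-[:MENTIONS]->(t)
        MERGE (m:Community {id: 'community-1', summary: 'Graph-based RAG methods'})
        MERGE (h)-[:IN_COMMUNITY]->(m)
        MERGE (t)-[:IN_COMMUNITY]->(m)
        """
    )

with GraphDatabase.driver(URI, auth=AUTH) as driver:
    with driver.session() as session:
        session.execute_write(load_toy_graph)

# Open the Neo4j Browser and run `MATCH (n) RETURN n` to inspect the graph.
```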



Models citing this paper 0

No model linking this paper

Cite arxiv.org/abs/2509.21710 in a model README.md to link it from this page.

Datasets citing this paper 0

No dataset linking this paper

Cite arxiv.org/abs/2509.21710 in a dataset README.md to link it from this page.

Spaces citing this paper 0

No Space linking this paper

Cite arxiv.org/abs/2509.21710 in a Space README.md to link it from this page.

Collections including this paper 2