arxiv:2505.16293

Augmenting LLM Reasoning with Dynamic Notes Writing for Complex QA

Published on May 22 · Submitted by rmahesh on May 26
Abstract

AI-generated summary

Notes Writing enhances iterative RAG by generating concise notes at each step, improving reasoning and performance with only a minimal increase in output tokens.

Iterative RAG for multi-hop question answering faces challenges with lengthy contexts and the buildup of irrelevant information. This hinders a model's capacity to process and reason over retrieved content and limits performance. While recent methods focus on compressing retrieved information, they are either restricted to single-round RAG, require finetuning, or lack scalability in iterative RAG. To address these challenges, we propose Notes Writing, a method that generates concise and relevant notes from retrieved documents at each step, thereby reducing noise and retaining only essential information. This indirectly increases the effective context length of Large Language Models (LLMs), enabling them to reason and plan more effectively while processing larger volumes of input text. Notes Writing is framework agnostic and can be integrated with different iterative RAG methods. We demonstrate its effectiveness with three iterative RAG methods, across two models and four evaluation datasets. Notes Writing yields an average improvement of 15.6 percentage points overall, with minimal increase in output tokens.
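As a sketch of the core idea: at each retrieval step, a lightweight LLM call condenses the retrieved documents into notes before they enter the reasoning context. The prompt wording, model name, and function signature below are illustrative assumptions, not the paper's exact implementation.

```python
# Minimal sketch of the per-step note-writing call, assuming an
# OpenAI-compatible chat API. Prompt text and model name are
# placeholders, not the authors' exact setup.
from openai import OpenAI

client = OpenAI()

def write_notes(question: str, sub_query: str, documents: list[str]) -> str:
    """Condense retrieved documents into concise, query-relevant notes."""
    context = "\n\n".join(documents)
    prompt = (
        f"Question: {question}\n"
        f"Current sub-query: {sub_query}\n\n"
        f"Retrieved documents:\n{context}\n\n"
        "Write brief notes keeping only the information from the documents "
        "that helps answer the sub-query. Omit everything else."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any instruction-tuned LLM works
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```

Because only the notes, not the raw documents, persist across steps, the reasoning context grows slowly even as the number of retrieval rounds increases.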

Community

Paper author · Paper submitter

Iterative RAG for multi-hop QA faces challenges with lengthy contexts and the buildup of irrelevant information. We propose NotesWriting, which generates concise notes from the retrieved documents at each step, thereby reducing noise and retaining only essential information. This indirectly increases the effective context length of LLMs, enabling them to reason and plan more effectively while processing larger volumes of input text. A sketch of how this slots into an iterative loop follows.
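For concreteness, here is one way such a note-writing step could plug into an iterative RAG loop. `llm`, `retrieve`, and `write_notes` are assumed stand-ins for whatever the host framework provides, and the planning prompt is hypothetical:

```python
# Hypothetical loop showing where note writing plugs into iterative RAG.
# `llm`, `retrieve`, and `write_notes` are assumed callables; a real
# framework (e.g. ReAct-style) supplies its own planning/stopping logic.
def iterative_rag(question: str, llm, retrieve, write_notes,
                  max_steps: int = 5) -> str:
    notes: list[str] = []
    for _ in range(max_steps):
        state = "\n".join(notes)
        # Plan the next sub-query from the accumulated notes only,
        # so the working context stays short as steps accumulate.
        sub_query = llm(
            f"Question: {question}\nNotes so far:\n{state}\n"
            "What should be retrieved next? Reply DONE if the notes suffice."
        )
        if sub_query.strip().upper() == "DONE":
            break
        docs = retrieve(sub_query)  # full retrieved documents
        # Only the condensed notes, not the raw documents, carry forward.
        notes.append(write_notes(question, sub_query, docs))
    all_notes = "\n".join(notes)
    return llm(f"Question: {question}\nNotes:\n{all_notes}\nAnswer:")
```

Because NotesWriting only changes what is written back into the context, it composes with different planners and retrievers, which is what makes it framework agnostic.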

