Abstract
InteractComp evaluates search agents' ability to recognize and resolve query ambiguity through interaction, revealing significant gaps in current models' capabilities.
Language agents have demonstrated remarkable potential in web search and information retrieval. However, these search agents assume user queries are complete and unambiguous, an assumption that diverges from reality, where users often begin with incomplete queries that require clarification through interaction. Yet most agents lack interactive mechanisms during the search process, and existing benchmarks cannot assess this capability. To address this gap, we introduce InteractComp, a benchmark designed to evaluate whether search agents can recognize query ambiguity and actively interact to resolve it during search. Following the principle of "easy to verify, interact to disambiguate," we construct 210 expert-curated questions across 9 domains using a target-distractor methodology that creates genuine ambiguity resolvable only through interaction. Evaluation of 17 models reveals a striking failure: the best model achieves only 13.73% accuracy, despite reaching 71.50% with complete context, exposing systematic overconfidence rather than reasoning deficits. Forced interaction produces dramatic gains, demonstrating latent capability that current interaction strategies fail to engage. Longitudinal analysis shows that interaction capabilities stagnated over 15 months while search performance improved seven-fold, revealing a critical blind spot. This stagnation, coupled with the immediate feedback inherent to search tasks, makes InteractComp a valuable resource for both evaluating and training interaction capabilities in search agents. The code is available at https://github.com/FoundationAgents/InteractComp.
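For a concrete picture of the protocol the abstract describes, here is a minimal sketch of an interact-or-answer evaluation loop: on each turn the agent may either ask a clarifying question (answered by an oracle holding the hidden context) or commit to an answer, and a `force_interaction` flag models the forced-interaction setting. All names here (`Question`, `evaluate`, `AlwaysAskOnce`, `RevealOracle`, the `step`/`reply` signatures) are hypothetical illustrations, not InteractComp's actual API; the real implementation lives in the linked repository.

```python
from dataclasses import dataclass

@dataclass
class Question:
    ambiguous_query: str   # under-specified query shown to the agent
    hidden_context: str    # disambiguating facts only the oracle knows
    answer: str            # single verifiable target answer

def evaluate(agent, oracle, questions, max_turns=3, force_interaction=False):
    """Score an agent that may either answer directly or ask to disambiguate."""
    correct = 0
    for q in questions:
        dialogue = [q.ambiguous_query]
        for turn in range(max_turns):
            # Forced-interaction setting: the agent must ask on its first turn.
            must_ask = force_interaction and turn == 0
            action, text = agent.step(dialogue, must_ask=must_ask)
            if action == "ask":
                # Oracle resolves the ambiguity from the hidden context.
                dialogue.append(oracle.reply(text, q.hidden_context))
            else:
                correct += text.strip().lower() == q.answer.strip().lower()
                break
        # An agent that never answers within max_turns scores zero on q.
    return correct / len(questions)

class AlwaysAskOnce:
    """Toy agent: asks one clarifying question, then parrots the oracle."""
    def step(self, dialogue, must_ask=False):
        if len(dialogue) == 1 or must_ask:
            return "ask", "Which of the candidate entities do you mean?"
        return "answer", dialogue[-1]

class RevealOracle:
    """Toy oracle: reveals the hidden context regardless of the question."""
    def reply(self, question, hidden_context):
        return hidden_context

# Usage with a single toy item (hypothetical data, not a benchmark question):
qs = [Question("Which agent benchmark paper?", "InteractComp", "InteractComp")]
print(evaluate(AlwaysAskOnce(), RevealOracle(), qs))  # -> 1.0
```

Under these assumptions, the gap the paper reports corresponds to agents that return an answer on turn 0 instead of asking, which this loop scores as incorrect whenever the ambiguity was genuine.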
Community
🚀 InteractComp exposes a key blind spot in today’s AI search agents — they can search and reason, but they can’t ask. This new benchmark of 210 ambiguous, expert-curated queries shows that even GPT-5 scores only 13.7% when left to decide on its own, yet doubles accuracy when forced to ask. While search performance has improved seven-fold, interaction remains stagnant — true intelligence begins by asking before answering.
Pardon my ignorance, guys, but I remember seeing a link around the comments section that would explain the relevance of these papers for non-technical users. I don't see it anymore!