SHANKS: Simultaneous Hearing and Thinking for Spoken Language Models
Abstract
SHANKS, a general inference framework, enables spoken language models to generate unspoken reasoning while listening to user input, enhancing real-time interaction and task completion.
Current large language models (LLMs) and spoken language models (SLMs) begin thinking and taking actions only after the user has finished their turn. This prevents the model from interacting during the user's turn and can lead to high response latency while the model waits to think. Consequently, thinking after receiving the full input is not suitable for speech-to-speech interaction, where real-time, low-latency exchange is important. We address this by noting that humans naturally "think while listening." In this paper, we propose SHANKS, a general inference framework that enables SLMs to generate unspoken chain-of-thought reasoning while listening to the user input. SHANKS streams the input speech in fixed-duration chunks and, as soon as a chunk is received, generates unspoken reasoning based on all previous speech and reasoning, while the user continues speaking. SHANKS uses this unspoken reasoning to decide whether to interrupt the user and to make tool calls to complete the task. We demonstrate that SHANKS enhances real-time user-SLM interaction in two scenarios: (1) when the user is presenting a step-by-step solution to a math problem, SHANKS can listen, reason, and interrupt when the user makes a mistake, achieving 37.1% higher interruption accuracy than a baseline that interrupts without thinking; and (2) in a tool-augmented dialogue, SHANKS can complete 56.9% of the tool calls before the user finishes their turn. Overall, SHANKS moves toward models that keep thinking throughout the conversation, not only after a turn ends. Animated illustrations of SHANKS can be found at https://d223302.github.io/SHANKS/.
Community
SHANKS is a method that allows spoken language models (SLMs) to think while listening to the user input. Unlike LLMs that only start to think after the full input is received, our method allows the SLM to begin reasoning about the user input as the user is speaking, enabling timely and well-founded interaction and reducing response latency. Check out a brief introduction at the project page: https://d223302.github.io/SHANKS/
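To make the interleaved listen-and-think loop concrete, here is a minimal Python sketch of the inference procedure the abstract describes: stream speech in fixed-duration chunks, generate unspoken reasoning after each chunk conditioned on all previous speech and reasoning, and let that reasoning trigger an interruption or a tool call before the turn ends. Every name here (the `slm` methods `generate_reasoning`, `decide_action`, `speak`, the `audio_stream.chunks` iterator, and the 2-second chunk length) is a hypothetical placeholder for illustration, not the authors' actual API.

```python
# A minimal sketch of a SHANKS-style inference loop (placeholder API, not the
# authors' implementation).
from dataclasses import dataclass, field

CHUNK_SECONDS = 2.0  # fixed-duration speech chunks (value is illustrative)


@dataclass
class DialogueState:
    speech_chunks: list = field(default_factory=list)  # all user speech received so far
    reasoning: list = field(default_factory=list)       # unspoken chain-of-thought so far


def shanks_listen_and_think(slm, audio_stream, tools):
    """Interleave listening and unspoken reasoning while the user is still speaking."""
    state = DialogueState()
    for chunk in audio_stream.chunks(seconds=CHUNK_SECONDS):
        state.speech_chunks.append(chunk)

        # Generate unspoken reasoning conditioned on all speech and reasoning so far.
        thought = slm.generate_reasoning(
            speech=state.speech_chunks,
            prior_reasoning=state.reasoning,
        )
        state.reasoning.append(thought)

        # Use the latest reasoning to decide whether to act before the turn ends.
        action = slm.decide_action(thought)
        if action.kind == "interrupt":
            # Barge in, e.g., when the user's step-by-step solution contains a mistake.
            return slm.speak(action.message, context=state)
        if action.kind == "tool_call":
            # Issue the tool call early and fold its result back into the reasoning.
            result = tools[action.name](**action.arguments)
            state.reasoning.append(f"[tool:{action.name}] {result}")

    # The user finished the turn: respond using everything accumulated so far.
    return slm.speak_final_response(context=state)
```

The key property this sketch tries to capture is that reasoning accumulates chunk by chunk while the user is still speaking, so interruptions and tool calls can fire mid-turn instead of waiting for the turn to end.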
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API:
- Chronological Thinking in Full-Duplex Spoken Dialogue Language Models (2025)
- Mini-Omni-Reasoner: Token-Level Thinking-in-Speaking in Large Speech Models (2025)
- Think, Verbalize, then Speak: Bridging Complex Thoughts and Comprehensible Speech (2025)
- FLEXI: Benchmarking Full-duplex Human-LLM Speech Interaction (2025)
- KAME: Tandem Architecture for Enhancing Knowledge in Real-Time Speech-to-Speech Conversational AI (2025)
- Chain-of-Thought Reasoning in Streaming Full-Duplex End-to-End Spoken Dialogue Systems (2025)
- Stream RAG: Instant and Accurate Spoken Dialogue Systems with Streaming Tool Usage (2025)