Papers
arxiv:2509.08753

Streaming Sequence-to-Sequence Learning with Delayed Streams Modeling

Published on Sep 10

Abstract

Delayed Streams Modeling (DSM) uses a decoder-only language model to handle streaming, multimodal sequence-to-sequence tasks with state-of-the-art performance and latency.

AI-generated summary

We introduce Delayed Streams Modeling (DSM), a flexible formulation for streaming, multimodal sequence-to-sequence learning. Sequence-to-sequence generation is often cast in an offline manner, where the model consumes the complete input sequence before generating the first output timestep. Alternatively, streaming sequence-to-sequence models rely on learning a policy for choosing when to advance on the input stream or write to the output stream. DSM instead models already time-aligned streams with a decoder-only language model. By moving the alignment to a pre-processing step and introducing appropriate delays between streams, DSM provides streaming inference of arbitrary output sequences, from any input combination, making it applicable to many sequence-to-sequence problems. In particular, given text and audio streams, automatic speech recognition (ASR) corresponds to the text stream being delayed, while the opposite gives a text-to-speech (TTS) model. We perform extensive experiments on these two major sequence-to-sequence tasks, showing that DSM provides state-of-the-art performance and latency while supporting arbitrarily long sequences, remaining competitive even with offline baselines. Code, samples, and demos are available at https://github.com/kyutai-labs/delayed-streams-modeling
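The core idea above can be illustrated with a minimal sketch: two time-aligned token streams, one shifted right by a delay and padded, are flattened frame by frame into a single sequence a decoder-only model could consume. All names (`delay_stream`, `interleave`, the `<pad>` token, the `a*`/`t*` tokens) are illustrative assumptions, not the authors' actual API or tokenization.

```python
# Illustrative sketch of the delayed-streams idea (hypothetical names,
# not the authors' implementation): delay one time-aligned stream,
# then interleave both per timestep for a decoder-only model.

PAD = "<pad>"

def delay_stream(stream, delay):
    """Shift a time-aligned stream right by `delay` steps, padding the front."""
    return [PAD] * delay + list(stream)

def interleave(streams):
    """Flatten per timestep: at each step, emit one token from every stream."""
    length = max(len(s) for s in streams)
    padded = [list(s) + [PAD] * (length - len(s)) for s in streams]
    return [padded[k][t] for t in range(length) for k in range(len(streams))]

# ASR-like setup: the text stream is delayed relative to audio, so the
# model sees a few audio frames before it must predict the matching text.
audio = ["a0", "a1", "a2", "a3"]
text = ["t0", "t1", "t2", "t3"]
tokens = interleave([audio, delay_stream(text, 2)])
print(tokens)
# ['a0', '<pad>', 'a1', '<pad>', 'a2', 't0', 'a3', 't1', '<pad>', 't2', '<pad>', 't3']
```

Swapping which stream is delayed turns the same layout into the TTS direction: delaying the audio stream instead would let the model read text tokens before emitting the corresponding audio frames.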


Models citing this paper 3


Spaces citing this paper 24
