OneFlow: Concurrent Mixed-Modal and Interleaved Generation with Edit Flows
Abstract
OneFlow is a non-autoregressive multimodal model that achieves superior performance on text-image generation and understanding tasks at reduced computational cost compared to autoregressive and diffusion-based models.
We present OneFlow, the first non-autoregressive multimodal model that enables variable-length and concurrent mixed-modal generation. Unlike autoregressive models that enforce rigid causal ordering between text and image generation, OneFlow combines an insertion-based Edit Flow for discrete text tokens with Flow Matching for image latents. OneFlow enables concurrent text-image synthesis with hierarchical sampling that prioritizes content over grammar. Through controlled experiments across model sizes from 1B to 8B, we demonstrate that OneFlow outperforms autoregressive baselines on both generation and understanding tasks while using up to 50% fewer training FLOPs. OneFlow surpasses both autoregressive and diffusion-based approaches while unlocking new capabilities for concurrent generation, iterative refinement, and natural reasoning-like generation.
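The abstract pairs an insertion-based Edit Flow for text with Flow Matching for image latents. As a point of reference for the latter, here is a minimal sketch of the standard conditional Flow Matching (rectified-flow) training target, where a sample is linearly interpolated between noise and data and the model regresses the constant velocity along that path. This is an assumption-laden illustration of the generic technique, not OneFlow's actual objective or code; the function name and shapes are hypothetical.

```python
import numpy as np

def flow_matching_pair(x0, x1, t):
    """Linear-interpolation (rectified-flow) conditional path.

    Given noise latents x0, data latents x1, and per-sample times t in
    [0, 1], returns the interpolated sample x_t and the velocity target
    the model would regress. Illustrative sketch only; OneFlow's exact
    image-latent objective is not specified here.
    """
    t = np.asarray(t, dtype=float).reshape(-1, *([1] * (x0.ndim - 1)))
    x_t = (1.0 - t) * x0 + t * x1   # point on the straight path
    v_target = x1 - x0              # velocity is constant on this path
    return x_t, v_target

# Toy usage: two latents, one at t=0 (pure noise) and one at t=1 (data).
rng = np.random.default_rng(0)
x0 = rng.standard_normal((2, 4))   # stand-in "noise" latents
x1 = rng.standard_normal((2, 4))   # stand-in "data" latents
x_t, v = flow_matching_pair(x0, x1, t=np.array([0.0, 1.0]))
```

At t = 0 the interpolant equals the noise sample and at t = 1 it equals the data sample, so a model trained to predict `v_target` can transport noise to data by integrating the learned velocity field.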
Community
Project page: johnlnguyen.com/oneflow/
Thanks for sharing our work!
super cool work!
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- LaDiR: Latent Diffusion Enhances LLMs for Text Reasoning (2025)
- Dream 7B: Diffusion Large Language Models (2025)
- Interleaving Reasoning for Better Text-to-Image Generation (2025)
- CoDA: Coding LM via Diffusion Adaptation (2025)
- JEPA-T: Joint-Embedding Predictive Architecture with Text Fusion for Image Generation (2025)
- REAR: Rethinking Visual Autoregressive Models via Generator-Tokenizer Consistency Regularization (2025)
- X-Streamer: Unified Human World Modeling with Audiovisual Interaction (2025)