arXiv:2509.13642

LLM-I: LLMs are Naturally Interleaved Multimodal Creators

Published on Sep 17 · Submitted by zrguo on Sep 18

Abstract

We propose LLM-Interleaved (LLM-I), a flexible and dynamic framework that reframes interleaved image-text generation as a tool-use problem. LLM-I is designed to overcome the "one-tool" bottleneck of current unified models, which are limited to synthetic imagery and struggle with tasks requiring factual grounding or programmatic precision. Our framework empowers a central LLM or MLLM agent to intelligently orchestrate a diverse toolkit of specialized visual tools, including online image search, diffusion-based generation, code execution, and image editing. The agent is trained to select and apply these tools proficiently via a Reinforcement Learning (RL) framework that features a hybrid reward system combining rule-based logic with judgments from LLM and MLLM evaluators. Trained on a diverse new dataset using four different model backbones, LLM-I demonstrates state-of-the-art performance, outperforming existing methods by a large margin across four benchmarks. We also introduce a novel test-time scaling strategy that provides further performance gains. Project Page: https://github.com/ByteDance-BandAI/LLM-I.

AI-generated summary

LLM-Interleaved (LLM-I) is a flexible framework that uses a central LLM to orchestrate a toolkit of specialized visual tools, achieving state-of-the-art performance in interleaved image-text generation through reinforcement learning and a novel test-time scaling strategy.
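The tool-use loop the abstract describes, in which a central planner selects among search, generation, code execution, and editing, can be sketched as follows. This is a minimal illustration and not the paper's implementation: `plan_step`, `TOOLS`, and the stub tool functions are all hypothetical names, and the real agent makes these decisions with a trained LLM or MLLM rather than hand-written rules.

```python
# Minimal sketch of an agent orchestrating specialized visual tools.
# All names here are hypothetical; the paper's actual interfaces may differ.
from dataclasses import dataclass

@dataclass
class ToolCall:
    tool: str      # which visual tool the agent selected
    argument: str  # e.g. a search query, diffusion prompt, or code snippet

# The four tool families named in the abstract, stubbed out for illustration.
def image_search(query: str) -> str:
    return f"<image: search result for '{query}'>"

def diffusion_generate(prompt: str) -> str:
    return f"<image: generated from prompt '{prompt}'>"

def code_execute(snippet: str) -> str:
    return f"<image: chart rendered by executing '{snippet}'>"

def image_edit(instruction: str) -> str:
    return f"<image: edited per '{instruction}'>"

TOOLS = {
    "search": image_search,
    "generate": diffusion_generate,
    "code": code_execute,
    "edit": image_edit,
}

def plan_step(context: str) -> ToolCall:
    """Stand-in for the LLM planner; hard-coded rules replace the trained agent."""
    if "stock price" in context:
        return ToolCall("code", "plot_prices()")   # programmatic precision
    if "photo of the Eiffel Tower" in context:
        return ToolCall("search", "Eiffel Tower")  # factual grounding
    return ToolCall("generate", context)           # synthetic imagery

def interleave(segments: list[str]) -> list[str]:
    """Follow each text segment with the output of one selected tool."""
    output = []
    for text in segments:
        output.append(text)
        call = plan_step(text)
        output.append(TOOLS[call.tool](call.argument))
    return output

print(interleave(["Here is a recent photo of the Eiffel Tower:",
                  "And the stock price trend this quarter:"]))
```

The point of routing rather than generating everything with one diffusion model is visible even in this toy version: factual requests go to retrieval, programmatic requests go to code execution, and only open-ended imagery falls through to generation.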

Community

Paper author and submitter (zrguo):

LLM-I introduces a flexible framework where a Large Language Model (LLM) acts as an agentic planner, orchestrating a diverse toolkit of specialized visual creation tools for interleaved image-text generation. The system achieves state-of-the-art performance on multiple benchmarks and demonstrates 100.0% tool accuracy on its novel LLMI-Bench, effectively generating factually grounded and programmatically precise visual content.
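The hybrid reward the abstract mentions, combining rule-based logic with judgments from LLM and MLLM evaluators, might be mixed along these lines. The specific checks, the linear combination, and the weight `alpha` below are illustrative assumptions rather than the paper's actual reward definition.

```python
# Hedged sketch of a hybrid RL reward: deterministic rule checks
# blended with a learned judge's score. Illustrative only.

def rule_reward(response: str) -> float:
    """Cheap, deterministic checks, e.g. that tool calls are well-formed."""
    has_image = "<image:" in response
    balanced = response.count("<tool>") == response.count("</tool>")
    return 0.5 * has_image + 0.5 * balanced

def judge_reward(response: str, judge) -> float:
    """A learned evaluator (an LLM or MLLM) scores quality in [0, 1]."""
    return judge(response)

def hybrid_reward(response: str, judge, alpha: float = 0.5) -> float:
    # The linear mix is an assumption; the paper may combine signals differently.
    return alpha * rule_reward(response) + (1 - alpha) * judge_reward(response, judge)

# Usage with a dummy judge that always returns 0.8:
dummy_judge = lambda r: 0.8
print(hybrid_reward("<tool>search</tool> <image: Eiffel Tower>", dummy_judge))
```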


Models citing this paper: 0 · Datasets citing this paper: 0 · Spaces citing this paper: 0 · Collections including this paper: 4

Cite arxiv.org/abs/2509.13642 in a model, dataset, or Space README.md to link it from this page.