🎬 LongLive: Real-time Interactive Long Video Generation
💡 TLDR: Turn interactive prompts into long videos—instantly, as you type!
LongLive: Real-time Interactive Long Video Generation [Paper]
Shuai Yang, Wei Huang, Ruihang Chu, Yicheng Xiao, Yuyang Zhao, Xianbang Wang, Muyang Li, Enze Xie, Yingcong Chen, Yao Lu, Song Han, Yukang Chen
We present LongLive, a frame-level autoregressive (AR) framework for real-time and interactive long video generation. Long video generation presents challenges in both efficiency and quality. Diffusion and Diffusion-Forcing models can produce high-quality videos but suffer from low efficiency due to bidirectional attention. Causal-attention AR models support KV caching for faster inference but often degrade in quality on long videos due to memory challenges during long-video training. Beyond static prompt-based generation, interactive capabilities such as streaming prompt inputs are critical for dynamic content creation, enabling users to guide narratives in real time. This interactive requirement significantly increases complexity, especially in ensuring visual consistency and semantic coherence during prompt transitions. To address these challenges, LongLive adopts a causal, frame-level AR design with three key components: a KV-recache mechanism that refreshes cached states with the new prompt for smooth, prompt-adherent switches; streaming long tuning, which enables long-video training and aligns training with inference (train-long–test-long); and short window attention paired with a frame-level attention sink (frame sink), which preserves long-range consistency while enabling faster generation. With these key designs, LongLive fine-tunes a 1.3B-parameter short-clip model to minute-long generation in just 32 GPU-days. At inference, LongLive sustains 20.7 FPS on a single NVIDIA H100 and achieves strong performance on VBench in both short- and long-video settings. LongLive supports up to 240-second videos on a single H100 GPU. With FP8 quantization, LongLive boosts inference to 24.8 FPS with marginal quality loss.
News
- [2025.9.25] We release the Paper, this GitHub repo LongLive with all training and inference code, the LongLive-1.3B model weights, and the demo Website.
Highlights
- Long Video Gen: LongLive supports video generation up to 240 seconds while maintaining visual consistency.
- Real-time Inference: LongLive generates at 20.7 FPS on a single H100 GPU, and at 24.8 FPS with FP8 quantization, with marginal quality loss.
- Efficient Fine-tuning: LongLive extends a short-clip model to minute-long generation in 32 H100 GPU-days.
Introduction
LongLive accepts sequential user prompts and generates corresponding videos in real time, enabling user-guided long video generation.
The framework of LongLive. (Left) Frame Sink + Short window attention. (Right) KV-recache.
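To make the left panel concrete, here is a minimal, hedged sketch of how a frame-level attention sink can be combined with a short local attention window in a causal mask. This is not the repo's implementation; the function name and parameters (frame_sink_window_mask, tokens_per_frame, window_frames, sink_frames) are illustrative assumptions.

```python
# Sketch: causal mask with a short per-frame window plus a frame-level attention
# sink (every query may always attend to the first `sink_frames` frames).
# Illustrative only; names and shapes are assumptions, not the repo's API.
import torch

def frame_sink_window_mask(num_frames, tokens_per_frame, window_frames, sink_frames=1):
    """Boolean [T, T] mask (True = attend), with T = num_frames * tokens_per_frame."""
    frame_idx = torch.arange(num_frames).repeat_interleave(tokens_per_frame)
    q = frame_idx[:, None]            # frame index of each query token
    k = frame_idx[None, :]            # frame index of each key token
    causal = k <= q                   # causal attention over frames
    local = (q - k) < window_frames   # short attention window over recent frames
    sink = k < sink_frames            # frame sink: always keep the first frame(s)
    return causal & (local | sink)

mask = frame_sink_window_mask(num_frames=8, tokens_per_frame=4, window_frames=3)
print(mask.shape)  # torch.Size([32, 32])
```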
The streaming long tuning pipeline. Our approach trains on long sequences by reusing the historical KV cache at each iteration to generate the next 5-second clip, which is then supervised by the teacher.
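As a rough, self-contained illustration of this loop (not the repo's training code: the real models are replaced by tiny linear layers, and the cache handling is a stand-in):

```python
# Toy version of the streaming long-tuning loop: reuse the historical cache,
# generate the next clip, supervise it with a frozen teacher, then carry the
# (detached) history forward. Purely illustrative shapes and modules.
import torch
import torch.nn as nn
import torch.nn.functional as F

student, teacher = nn.Linear(8, 8), nn.Linear(8, 8)
teacher.requires_grad_(False)
opt = torch.optim.AdamW(student.parameters(), lr=1e-4)

cache = torch.zeros(1, 8)                     # stands in for the historical KV cache
for prompt_emb in torch.randn(6, 1, 8):       # stream of per-clip prompt embeddings
    clip = student(prompt_emb + cache)        # generate the next clip from the cache
    with torch.no_grad():
        target = teacher(prompt_emb + cache)  # teacher supervision on the new clip
    loss = F.mse_loss(clip, target)
    opt.zero_grad()
    loss.backward()
    opt.step()
    cache = clip.detach()                     # reuse history, cut gradients through it
```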
The effectiveness of Frame Sink.
The effectiveness of KV-recache: consistent transitions with new-prompt compliance.
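A hedged, toy-level sketch of the KV-recache idea (the projections and prompt conditioning below are stand-ins, not the actual model): at a prompt switch, the key/value states of already-generated frames are recomputed under the new prompt instead of being kept stale.

```python
# Toy KV-recache: recompute K/V of the cached (already generated) frames under the
# new prompt, so future frames stay consistent with history yet follow the new prompt.
# All modules and shapes are illustrative assumptions.
import torch
import torch.nn as nn

d = 16
to_k, to_v = nn.Linear(d, d), nn.Linear(d, d)   # attention K/V projections
prompt_proj = nn.Linear(d, d)                   # injects prompt conditioning

frames = torch.randn(12, d)                     # hidden states of generated frames
old_prompt, new_prompt = torch.randn(d), torch.randn(d)

def build_kv(frame_h, prompt_emb):
    h = frame_h + prompt_proj(prompt_emb)       # condition cached frames on the prompt
    return to_k(h), to_v(h)

k_cache, v_cache = build_kv(frames, old_prompt) # cache while the old prompt is active
k_cache, v_cache = build_kv(frames, new_prompt) # KV-recache at the prompt switch
```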
Interactive 60s videos with 6 prompts. See our demo Website for video examples.
Installation
Requirements
We tested this repo on the following setup:
- Nvidia GPU with at least 40 GB memory (A100 and H100 tested).
- Linux operating system.
- 64 GB RAM.
Other hardware setups could also work but haven't been tested.
Environment
Create a conda environment and install dependencies:
git clone https://github.com/NVlabs/LongLive
cd LongLive
conda create -n longlive python=3.10 -y
conda activate longlive
conda install nvidia/label/cuda-12.4.1::cuda
conda install -c nvidia/label/cuda-12.4.1 cudatoolkit
pip install torch==2.5.0 torchvision==0.20.0 torchaudio==2.5.0 --index-url https://download.pytorch.org/whl/cu124
pip install -r requirements.txt
pip install flash-attn==2.7.4.post1 --no-build-isolation
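After installing, a quick sanity check (not part of the repo's scripts) is to confirm that PyTorch sees the GPU and that flash-attn imports inside the activated environment:

```python
# Optional environment check; run inside the activated conda env.
import torch
import flash_attn

print(torch.__version__, torch.version.cuda)  # expect 2.5.0 and 12.4
print(torch.cuda.is_available())              # expect True on a CUDA-capable machine
print(flash_attn.__version__)                 # expect 2.7.4.post1
```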
Inference
Download checkpoints
huggingface-cli download Wan-AI/Wan2.1-T2V-1.3B --local-dir wan_models/Wan2.1-T2V-1.3B
huggingface-cli download Efficient-Large-Model/LongLive --local-dir longlive_models
Single Prompt Video Generation
bash inference.sh
Interactive Long Video Generation
bash interactive_inference.sh
How to contribute
- Make sure to have git installed.
- Create your own fork of the project.
- Clone the repository to your local machine using git clone with the URL of this project.
- Read the Requirements and Installation sections above.
- Commit and push your changes.
- Make a pull request when finished modifying the project.
Citation
Please consider citing our paper and this framework if they are helpful in your research.
@article{yang2025longlive,
title={LongLive: Real-time Interactive Long Video Generation},
author={Shuai Yang and Wei Huang and Ruihang Chu and Yicheng Xiao and Yuyang Zhao and Xianbang Wang and Muyang Li and Enze Xie and Yingcong Chen and Yao Lu and Song Han and Yukang Chen},
year={2025},
eprint={2509.22622},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
License
- The LongLive-1.3B model weights are released under the CC-BY-NC 4.0 license.
Acknowledgement
- Self-Forcing: the codebase and algorithm we built upon. Thanks for their wonderful work.
- Wan: the base model we built upon. Thanks for their wonderful work.