One Small Step in Latent, One Giant Leap for Pixels: Fast Latent Upscale Adapter for Your Diffusion Models
Abstract
LUA is a lightweight module that performs super-resolution directly in the latent space of diffusion models, improving efficiency without compromising image quality.
Diffusion models struggle to scale beyond their training resolutions, as direct high-resolution sampling is slow and costly, while post-hoc image super-resolution (ISR) introduces artifacts and additional latency by operating after decoding. We present the Latent Upscaler Adapter (LUA), a lightweight module that performs super-resolution directly on the generator's latent code before the final VAE decoding step. LUA integrates as a drop-in component, requiring no modifications to the base model or additional diffusion stages, and enables high-resolution synthesis through a single feed-forward pass in latent space. A shared Swin-style backbone with scale-specific pixel-shuffle heads supports 2x and 4x factors and remains compatible with image-space SR baselines, achieving comparable perceptual quality with nearly 3x lower decoding and upscaling time (adding only +0.42 s for 1024 px generation from 512 px, compared to 1.87 s for pixel-space SR using the same SwinIR architecture). Furthermore, LUA shows strong generalization across the latent spaces of different VAEs, making it easy to deploy without retraining from scratch for each new decoder. Extensive experiments demonstrate that LUA closely matches the fidelity of native high-resolution generation while offering a practical and efficient path to scalable, high-fidelity image synthesis in modern diffusion pipelines.
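The abstract describes the core mechanism: upscale the generator's latent code with a shared backbone feeding scale-specific pixel-shuffle heads, then decode once at the target resolution. Below is a minimal, hypothetical sketch of that idea; the real LUA uses a Swin-style backbone, which is replaced here by a single convolution, and all names and sizes are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class LatentUpscalerHead(nn.Module):
    """Hypothetical sketch of a scale-specific pixel-shuffle upscaler.

    The paper's LUA shares a Swin-style backbone across scales; a single
    conv stands in for that backbone here to keep the example minimal.
    """

    def __init__(self, latent_channels: int = 4, scale: int = 2):
        super().__init__()
        # Stand-in for the shared feature backbone.
        self.body = nn.Conv2d(latent_channels, 64, kernel_size=3, padding=1)
        # Pixel-shuffle head: predict scale^2 * C channels, then rearrange
        # them into a spatially upscaled latent.
        self.head = nn.Conv2d(64, latent_channels * scale**2,
                              kernel_size=3, padding=1)
        self.shuffle = nn.PixelShuffle(scale)

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.shuffle(self.head(torch.relu(self.body(z))))

# A 512 px image in an SD-style VAE (8x spatial downsampling, 4 latent
# channels) corresponds to a 64x64x4 latent.
z_lo = torch.randn(1, 4, 64, 64)
lua = LatentUpscalerHead(scale=2)
z_hi = lua(z_lo)  # 128x128 latent; a single VAE decode then yields 1024 px
print(tuple(z_hi.shape))
```

Because the upscaling happens before decoding, the expensive VAE decode runs only once, at the target resolution, which is where the reported latency savings over pixel-space SR come from.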
Community
We show that high-res synthesis can be done by upscaling in latent space instead of pixels, keeping a single final decode. This preserves quality while cutting latency and works across scales.
Is there a comfyui node available?
Not yet; a ComfyUI node isn’t available at the moment.
We’re planning to add supplementary material to the paper, release the code on GitHub and Hugging Face, and provide a ComfyUI node as soon as we can!
Thanks!
This is an automated message from Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- Asymmetric VAE for One-Step Video Super-Resolution Acceleration (2025)
- ScaleDiff: Higher-Resolution Image Synthesis via Efficient and Model-Agnostic Diffusion (2025)
- Aligning Visual Foundation Encoders to Tokenizers for Diffusion Models (2025)
- Vision Foundation Models Can Be Good Tokenizers for Latent Diffusion Models (2025)
- InfVSR: Breaking Length Limits of Generic Video Super-Resolution (2025)
- LucidFlux: Caption-Free Universal Image Restoration via a Large-Scale Diffusion Transformer (2025)
- FlashVSR: Towards Real-Time Diffusion-Based Streaming Video Super-Resolution (2025)
You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend
Hurry up! We need ComfyUI nodes!