Hyperspherical Latents Improve Continuous-Token Autoregressive Generation
Abstract
SphereAR, an autoregressive model with hyperspherical constraints, achieves state-of-the-art performance in image generation, surpassing diffusion and masked-generation models at similar parameter scales.
Autoregressive (AR) models are promising for image generation, yet continuous-token AR variants often trail latent diffusion and masked-generation models. The core issue is heterogeneous variance in VAE latents, which is amplified during AR decoding, especially under classifier-free guidance (CFG), and can cause variance collapse. We propose SphereAR to address this issue. Its core design is to constrain all AR inputs and outputs, including after CFG, to lie on a fixed-radius hypersphere (constant ℓ2 norm), leveraging hyperspherical VAEs. Our theoretical analysis shows that the hyperspherical constraint removes the scale component (the primary cause of variance collapse), thereby stabilizing AR decoding. Empirically, on ImageNet generation, SphereAR-H (943M) sets a new state of the art for AR models, achieving FID 1.34. Even at smaller scales, SphereAR-L (479M) reaches FID 1.54 and SphereAR-B (208M) reaches 1.92, matching or surpassing much larger baselines such as MAR-H (943M, 1.55) and VAR-d30 (2B, 1.92). To our knowledge, this is the first time a pure next-token AR image generator with raster order surpasses diffusion and masked-generation models at comparable parameter scales.
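The abstract's core mechanism can be sketched in a few lines: every latent token is projected onto a fixed-radius hypersphere, and the projection is reapplied after the CFG combination so the guided token cannot drift in scale. This is a minimal illustrative sketch, not the authors' implementation; the function names and the unit radius are assumptions.

```python
import numpy as np

def project_to_sphere(z, radius=1.0, eps=1e-8):
    # Rescale z to have a constant ell_2 norm along the last axis,
    # i.e. place it on a hypersphere of the given radius.
    norm = np.linalg.norm(z, axis=-1, keepdims=True)
    return radius * z / (norm + eps)

def cfg_on_sphere(z_cond, z_uncond, scale, radius=1.0):
    # Standard classifier-free guidance extrapolation, followed by
    # re-projection so the guided token stays on the hypersphere.
    # This removes the scale component that the paper identifies as
    # the primary cause of variance collapse in AR decoding.
    z = z_uncond + scale * (z_cond - z_uncond)
    return project_to_sphere(z, radius)

# Example: even with a large guidance scale, the output keeps norm 1.
rng = np.random.default_rng(0)
z_cond = project_to_sphere(rng.normal(size=16))
z_uncond = project_to_sphere(rng.normal(size=16))
z_guided = cfg_on_sphere(z_cond, z_uncond, scale=3.0)
print(np.linalg.norm(z_guided))  # ≈ 1.0
```

Without the final re-projection, the CFG extrapolation `z_uncond + scale * (z_cond - z_uncond)` is free to shrink or inflate the token's norm, which is exactly the instability the paper attributes to heterogeneous latent variance.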
Community
We stabilize continuous-token AR image generation with one idea: hyperspherical latents.
Normalize every token (even after CFG) to a fixed radius → scale-invariant AR.
FID 1.34 (943M) / 1.54 (479M) / 1.92 (208M). Pure next-token, raster order.
The following papers were recommended by the Semantic Scholar API
- ARSS: Taming Decoder-only Autoregressive Visual Generation for View Synthesis From Single View (2025)
- Scale-Wise VAR is Secretly Discrete Diffusion (2025)
- NextStep-1: Toward Autoregressive Image Generation with Continuous Tokens at Scale (2025)
- Aligning Visual Foundation Encoders to Tokenizers for Diffusion Models (2025)
- CLEAR: Continuous Latent Autoregressive Modeling for High-quality and Low-latency Speech Synthesis (2025)
- A Survey on Diffusion Language Models (2025)
- AR-GRPO: Training Autoregressive Image Generation Models via Reinforcement Learning (2025)
Models citing this paper 1