Training Transformer Models by Wavelet Losses Improves Quantitative and Visual Performance in Single Image Super-Resolution • arXiv:2404.11273 • Published Apr 17, 2024
Patch n' Pack: NaViT, a Vision Transformer for any Aspect Ratio and Resolution • arXiv:2307.06304 • Published Jul 12, 2023
High-Dimensional Learning Dynamics of Quantized Models with Straight-Through Estimator • arXiv:2510.10693 • Published Oct 12, 2025
Neural Garbage Collection: Learning to Forget while Learning to Reason • arXiv:2604.18002 • Published 15 days ago
Dive into Claude Code: The Design Space of Today's and Future AI Agent Systems • arXiv:2604.14228 • Published 21 days ago
Maximal Brain Damage Without Data or Optimization: Disrupting Neural Networks via Sign-Bit Flips • arXiv:2502.07408 • Published 19 days ago
CoD-Lite: Real-Time Diffusion-Based Generative Image Compression • arXiv:2604.12525 • Published 21 days ago
One View Is Enough! Monocular Training for In-the-Wild Novel View Generation • arXiv:2603.23488 • Published Mar 24
Domain-Specific Latent Representations Improve the Fidelity of Diffusion-Based Medical Image Super-Resolution • arXiv:2604.12152 • Published 21 days ago
Three things everyone should know about Vision Transformers • arXiv:2203.09795 • Published Mar 18, 2022
Rethinking Vision Transformer Depth via Structural Reparameterization • arXiv:2511.19718 • Published Nov 24, 2025