SelfCP: Compressing Long Prompt to 1/12 Using the Frozen Large Language Model Itself Paper • 2405.17052 • Published May 27, 2024 • 2
Learning-Order Autoregressive Models with Application to Molecular Graph Generation Paper • 2503.05979 • Published Mar 7, 2025 • 1
Is the Reversal Curse a Binding Problem? Uncovering Limitations of Transformers from a Basic Generalization Failure Paper • 2504.01928 • Published Apr 2025 • 1
EAGLE-3: Scaling up Inference Acceleration of Large Language Models via Training-Time Test Paper • 2503.01840 • Published Mar 3, 2025 • 5
Hogwild! Inference: Parallel LLM Generation via Concurrent Attention Paper • 2504.06261 • Published Apr 2025 • 98
OmniSVG: A Unified Scalable Vector Graphics Generation Model Paper • 2504.06263 • Published Apr 2025 • 141
Value Residual Learning For Alleviating Attention Concentration In Transformers Paper • 2410.17897 • Published Oct 23, 2024 • 9
Flex Attention: A Programming Model for Generating Optimized Attention Kernels Paper • 2412.05496 • Published Dec 7, 2024 • 1
Agent S2: A Compositional Generalist-Specialist Framework for Computer Use Agents Paper • 2504.00906 • Published Apr 2025 • 20
ElaLoRA: Elastic & Learnable Low-Rank Adaptation for Efficient Model Fine-Tuning Paper • 2504.00254 • Published Apr 2025 • 1
Representation & Optimization Collection • Understanding of representation sheds light on optimization • 12 items • 1
Approximate Nullspace Augmented Finetuning for Robust Vision Transformers Paper • 2403.10476 • Published Mar 15, 2024 • 1
Layer by Layer: Uncovering Hidden Representations in Language Models Paper • 2502.02013 • Published Feb 4, 2025 • 1
CaKE: Circuit-aware Editing Enables Generalizable Knowledge Learners Paper • 2503.16356 • Published Mar 2025 • 15
I Have Covered All the Bases Here: Interpreting Reasoning Features in Large Language Models via Sparse Autoencoders Paper • 2503.18878 • Published Mar 2025 • 114