- SageAttention2 Technical Report: Accurate 4 Bit Attention for Plug-and-play Inference Acceleration
  Paper • 2411.10958 • Published • 56
- SpargeAttn: Accurate Sparse Attention Accelerating Any Model Inference
  Paper • 2502.18137 • Published • 58
- SageAttention3: Microscaling FP4 Attention for Inference and An Exploration of 8-Bit Training
  Paper • 2505.11594 • Published • 76
- SageAttention: Accurate 8-Bit Attention for Plug-and-play Inference Acceleration
  Paper • 2410.02367 • Published • 51
Jintao Zhang (jt-zhang)
AI & ML interests: Efficient ML