
Lewis Tunstall

lewtun

AI & ML interests

LLMs, LLMs, LLMs

Recent Activity

replied to badaoui's post 15 minutes ago
Is there a "one-size-fits-all" recipe for quantizing Large Language Models? 🤔 As part of my ongoing work in mixed-precision quantization, I've been exploring this question by measuring layer-by-layer sensitivity. The goal is to see if we can find universal rules for which layers can be quantized aggressively without impacting performance.The results are fascinating and reveal two key insights: 1️⃣ Sensitivity profiles are like architectural "fingerprints." Models from the same family share strikingly similar sensitivity patterns. As you can see in the charts below for the Gemma and SmolLM families, the ranking and relative sensitivity of the layers remain remarkably consistent. This suggests that the underlying architecture is a primary driver of a model's quantization behavior. 2️⃣ A "universal" mixed-precision quantization strategy is challenging. While models within a family are similar, these "fingerprints" change dramatically when comparing different architectures like LLaMA, Qwen, and StableLM. This highlights the difficulty in creating a generalized mixed-precision configuration that works optimally across all model families. However, there is one near-universal truth we uncovered: the mlp.down_proj layer consistently emerges as one of the most sensitive components across all models studied. This finding strongly resonates with the work in "The Super Weight in Large Language Models" (by Mengxia Yu et al.). The paper identifies that functionally critical parameters, or "super weights," are concentrated in these down_proj layers. Our empirical results provide clear validation for this theory, showing these layers are highly intolerant to precision loss. In short, while every architecture has a unique sensitivity profile, a fingerprint shaped not only by its core design but also by its specific training dataset and optimization approach, some components remain universally critical! What are your thoughts?
reacted to badaoui's post with 🚀 17 minutes ago
Is there a "one-size-fits-all" recipe for quantizing Large Language Models? 🤔 As part of my ongoing work in mixed-precision quantization, I've been exploring this question by measuring layer-by-layer sensitivity. The goal is to see if we can find universal rules for which layers can be quantized aggressively without impacting performance.The results are fascinating and reveal two key insights: 1️⃣ Sensitivity profiles are like architectural "fingerprints." Models from the same family share strikingly similar sensitivity patterns. As you can see in the charts below for the Gemma and SmolLM families, the ranking and relative sensitivity of the layers remain remarkably consistent. This suggests that the underlying architecture is a primary driver of a model's quantization behavior. 2️⃣ A "universal" mixed-precision quantization strategy is challenging. While models within a family are similar, these "fingerprints" change dramatically when comparing different architectures like LLaMA, Qwen, and StableLM. This highlights the difficulty in creating a generalized mixed-precision configuration that works optimally across all model families. However, there is one near-universal truth we uncovered: the mlp.down_proj layer consistently emerges as one of the most sensitive components across all models studied. This finding strongly resonates with the work in "The Super Weight in Large Language Models" (by Mengxia Yu et al.). The paper identifies that functionally critical parameters, or "super weights," are concentrated in these down_proj layers. Our empirical results provide clear validation for this theory, showing these layers are highly intolerant to precision loss. In short, while every architecture has a unique sensitivity profile, a fingerprint shaped not only by its core design but also by its specific training dataset and optimization approach, some components remain universally critical! What are your thoughts?
liked a Space 1 day ago
timqian/like-history

Organizations

Hugging Face, AutoNLP, Natural Language Processing with Transformers, BigScience Workshop, Ought, Hugging Face Internal Testing Organization, Testing Benchmarks on the Hub, The LLM Course, NLP en ES, GEM benchmark, SetFit, Benchmarks Hosting, GEM benchmark submissions, ALPS test, Evaluation datasets, Deep Learning for Particle Physicists, fast.ai community, DreamBooth Hackathon, trl internal testing, SomosNLP, HF Course Demos, Marsyas (Music Analysis, Retrieval and Synthesis for Audio Signals), ONNXConfig for all, How to teach Hugging Face?, Jet Universe, Evaluation on the Hub, The ML Landscape of Top Taggers, HuggingFaceM4, HF Canonical Model Maintainers, TRL, BigCode, Hugging Face H4, Inference Endpoints, Hugging Face OSS Metrics, BigCode Data, Reading Group, Hugging Face H4 Community, Hugging Face Smol Models Research, Hugging Face Smol Cluster, Open LLM Leaderboard, EPFL LLM Team, H4 Alignment Handbook, ZeroGPU Explorers, h4-argilla-collab, Project-Numina, ORPO Explorers, Kato, Distillation Hugs, HuggingFaceFW-Dev, Hugging Face Discord Community, Data Agents, nltpt, IOPO Experiments, Hugging Face FineVideo, Reliable Agents, Hugging Face Science, HF CMU Collab, wut?, kernels-community, Open R1, todo, OlympicCoder, yofo, LightEval Internal Testing, yofo-wildflower