lewtun
AI & ML interests
LLMs, LLMs, LLMs
Recent Activity
Replied to badaoui's post (15 minutes ago):
Is there a "one-size-fits-all" recipe for quantizing Large Language Models? 🤔
As part of my ongoing work in mixed-precision quantization, I've been exploring this question by measuring layer-by-layer sensitivity. The goal is to see if we can find universal rules for which layers can be quantized aggressively without impacting performance. The results are fascinating and reveal two key insights (a sketch of this kind of sweep follows them below):
1️⃣ Sensitivity profiles are like architectural "fingerprints." Models from the same family share strikingly similar sensitivity patterns. As you can see in the charts below for the Gemma and SmolLM families, the ranking and relative sensitivity of the layers remain remarkably consistent. This suggests that the underlying architecture is a primary driver of a model's quantization behavior.
2️⃣ A "universal" mixed-precision quantization strategy is challenging. While models within a family are similar, these "fingerprints" change dramatically when comparing different architectures like LLaMA, Qwen, and StableLM. This highlights the difficulty in creating a generalized mixed-precision configuration that works optimally across all model families.
However, there is one near-universal truth we uncovered: the mlp.down_proj layer consistently emerges as one of the most sensitive components across all models studied.
This finding strongly resonates with the work in "The Super Weight in Large Language Models" (by Mengxia Yu et al.). The paper identifies that functionally critical parameters, or "super weights," are concentrated in these down_proj layers. Our empirical results provide clear validation for this theory, showing these layers are highly intolerant to precision loss.
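A hedged, practical corollary: with the transformers + bitsandbytes integration you can quantize everything except the fragile down_proj layers (and, conventionally, lm_head), keeping them in higher precision. A minimal sketch, with the model ID as a placeholder:

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Quantize the model to 8-bit but leave the sensitive modules unquantized.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_skip_modules=["down_proj", "lm_head"],  # kept in higher precision
)

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-0.5B",  # placeholder model ID
    quantization_config=bnb_config,
    device_map="auto",
)
```

Whether such a mixed scheme actually beats a uniform one is exactly the kind of question the sensitivity sweep above helps answer.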
In short, while every architecture has a unique sensitivity profile (a fingerprint shaped not only by its core design but also by its training data and optimization approach), some components remain universally critical!
What are your thoughts?
Reacted with 🚀 to badaoui's post above (17 minutes ago).
Is there a "one-size-fits-all" recipe for quantizing Large Language Models? 🤔
As part of my ongoing work in mixed-precision quantization, I've been exploring this question by measuring layer-by-layer sensitivity. The goal is to see if we can find universal rules for which layers can be quantized aggressively without impacting performance.The results are fascinating and reveal two key insights:
1️⃣ Sensitivity profiles are like architectural "fingerprints." Models from the same family share strikingly similar sensitivity patterns. As you can see in the charts below for the Gemma and SmolLM families, the ranking and relative sensitivity of the layers remain remarkably consistent. This suggests that the underlying architecture is a primary driver of a model's quantization behavior.
2️⃣ A "universal" mixed-precision quantization strategy is challenging. While models within a family are similar, these "fingerprints" change dramatically when comparing different architectures like LLaMA, Qwen, and StableLM. This highlights the difficulty in creating a generalized mixed-precision configuration that works optimally across all model families.
However, there is one near-universal truth we uncovered: the mlp.down_proj layer consistently emerges as one of the most sensitive components across all models studied.
This finding strongly resonates with the work in "The Super Weight in Large Language Models" (by Mengxia Yu et al.). The paper identifies that functionally critical parameters, or "super weights," are concentrated in these down_proj layers. Our empirical results provide clear validation for this theory, showing these layers are highly intolerant to precision loss.
In short, while every architecture has a unique sensitivity profile, a fingerprint shaped not only by its core design but also by its specific training dataset and optimization approach, some components remain universally critical!
What are your thoughts?
Models
lewtun/gemma-7b-dpo-full-mix1-beta-0.2 • Text Generation • 9B • Updated • 2
lewtun/gemma-7b-dpo-full-mix1-beta-0.1 • Text Generation • 9B • Updated • 4
lewtun/gemma-7b-dpo-full-ultrafeedback-v0 • Text Generation • Updated • 2
lewtun/gemma-7b-dpo-full-mix-beta-0.1 • Updated
lewtun/gemma-7b-dpo-full-orca-v0 • Text Generation • 9B • Updated • 2
lewtun/gemma-7b-sft-full-deita-10k-v0 • Text Generation • 9B • Updated • 3
lewtun/gemma-7b-sft-full-ultrachat-v0 • Text Generation • 9B • Updated • 4 • 1
lewtun/gemma-7b-sft-full-longest-1k-v1 • Text Generation • 9B • Updated • 2
lewtun/gemma-7b-sft-full-longest-1k-v0 • Text Generation • 9B • Updated • 2
lewtun/gemma-7b-sft-full-dolly-v3 • Text Generation • 9B • Updated • 4
lewtun/gemma-7b-sft-full-dolly-v2 • Text Generation • 9B • Updated • 2
lewtun/gemma-7b-sft-full-dolly-v1 • Text Generation • 9B • Updated • 2
lewtun/gemma-7b-sft-full-dolly-v0 • Text Generation • 9B • Updated • 4
lewtun/dummy-model • Text Generation • 0.5B • Updated • 4
lewtun/zephyr-7b-dpo-qlora-fix
lewtun/zephyr-7b-dpo-qlora-8e0975a
lewtun/zephyr-7b-dpo-qlora
lewtun/handbook-sft-qlora-test
lewtun/handbook-sft-test • Text Generation • 7B • Updated • 2
lewtun/zephyr-7b-dpo-full • Text Generation • 7B • Updated • 5
lewtun/zephyr-7b-sft-qlora
lewtun/kato-dummy • Text Classification • 0.4B • Updated • 2
lewtun/test-upload • Updated
lewtun/zephyr-7b-sft • Text Generation • 7B • Updated • 5
lewtun/zephyr-7b-dpo • Text Generation • 7B • Updated • 5
lewtun/mistral-7b-sft-ultrachat-arithmo-25 • Text Generation • Updated • 2
lewtun/mistral-7b-sft-ultrachat-arithmo-50 • Text Generation • Updated • 5 • 1
lewtun/mistral-7b-sft-ultrachat-arithmo-full • Text Generation • Updated • 189 • 1
lewtun/test-tgi-main • Text Generation • Updated • 13
lewtun/test-tgi-branch • Updated