---
license: apache-2.0
base_model:
- Qwen/Qwen3-30B-A3B
---
# Qwen3-16B-A3B

A man-made horror beyond your comprehension.

But no, seriously, this is my experiment to:
- measure the probability that each expert activates in each layer, over my own fairly diverse set of calibration data
- prune the 64 least-used of the 128 experts in every layer, with the router weights and expert indexing reordered per layer to match (rough sketches of both steps are included below)
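
Neither the hacked vLLM nor the measurement code itself is shown here, so purely as an illustration: the same counting could be sketched with plain Transformers forward hooks, roughly as below. The `mlp.gate` module path, the 128 experts per layer, and the top-8 routing are assumptions about the Qwen3-30B-A3B layout rather than details taken from this repo, and the calibration texts are a stand-in for the private set.

```python
# Hypothetical re-creation of the measurement step using Transformers hooks.
# Assumptions (not from the repo): MoE layers expose their router as `mlp.gate`,
# there are 128 experts per layer, and 8 experts are routed per token.
from collections import defaultdict

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "Qwen/Qwen3-30B-A3B"
NUM_EXPERTS = 128   # experts per MoE layer (assumed)
TOP_K = 8           # experts activated per token (assumed)

tok = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto", device_map="auto")
model.eval()

# activation_counts[layer][expert] = how many tokens were routed to that expert
activation_counts = defaultdict(lambda: torch.zeros(NUM_EXPERTS, dtype=torch.long))
tokens_seen = 0

def make_hook(layer_idx):
    def hook(module, inputs, output):
        # The router is a Linear that emits one logit per expert for every token;
        # the top-k of those logits is the set of experts that will activate.
        logits = (output[0] if isinstance(output, tuple) else output).reshape(-1, NUM_EXPERTS)
        chosen = logits.topk(TOP_K, dim=-1).indices.flatten()
        activation_counts[layer_idx] += torch.bincount(chosen, minlength=NUM_EXPERTS).cpu()
    return hook

for i, layer in enumerate(model.model.layers):
    if hasattr(layer.mlp, "gate"):              # only the MoE layers have a router
        layer.mlp.gate.register_forward_hook(make_hook(i))

calibration_texts = ["..."]  # stand-in; the calibration set used for this repo is private
with torch.no_grad():
    for text in calibration_texts:
        ids = tok(text, return_tensors="pt").input_ids.to(model.device)
        model(ids)
        tokens_seen += ids.numel()

# Per-layer activation probability of each expert, in the spirit of the repo's .txt files.
for layer_idx in sorted(activation_counts):
    probs = activation_counts[layer_idx].float() / max(tokens_seen, 1)
    print(layer_idx, " ".join(f"{p:.6f}" for p in probs.tolist()))
```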
The pruned model can still write semi-coherently, with no additional training or distillation from the original 30B MoE applied on top of it.
The .txt files with the original measurements are provided in the repo along with the exported weights.

Custom testing to measure the experts was done on a hacked version of vLLM, and I then wrote a bespoke script to selectively export the weights according to those measurements.
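
The bespoke export script is likewise not shown, so again as a sketch only: assuming the per-layer activation probabilities from the .txt files have been parsed into lists, and the checkpoint is held as a plain state dict with the usual Transformers Qwen MoE tensor names (`model.layers.{i}.mlp.gate.weight` and `model.layers.{i}.mlp.experts.{e}.gate_proj/up_proj/down_proj.weight`), the selective export could look roughly like this:

```python
# Hypothetical sketch of the export step: keep the 64 most-activated experts per layer,
# renumber them 0..63, and slice the router rows to match. Tensor names follow the usual
# Transformers Qwen MoE naming and are an assumption, not this repo's actual script.
import torch

KEEP = 64  # experts kept out of 128 per layer

def prune_layer(state_dict, layer_idx, activation_probs):
    """Rewrite one MoE layer in place: drop the least-used experts, reindex the rest."""
    # Kept experts ordered from most to least used; their rank becomes their new index,
    # which is one way to realise the "reordered router and indexing" per layer.
    keep = torch.argsort(torch.as_tensor(activation_probs), descending=True)[:KEEP]

    # Router: a Linear(hidden -> num_experts), so row e scores expert e.
    gate_key = f"model.layers.{layer_idx}.mlp.gate.weight"
    state_dict[gate_key] = state_dict[gate_key][keep].clone()

    # Expert FFNs: copy the kept experts into their new slots, then drop everything else.
    prefix = f"model.layers.{layer_idx}.mlp.experts"
    kept_tensors = {}
    for new_idx, old_idx in enumerate(keep.tolist()):
        for proj in ("gate_proj", "up_proj", "down_proj"):
            kept_tensors[f"{prefix}.{new_idx}.{proj}.weight"] = (
                state_dict[f"{prefix}.{old_idx}.{proj}.weight"]
            )
    for key in [k for k in state_dict if k.startswith(prefix + ".")]:
        del state_dict[key]
    state_dict.update(kept_tensors)
    return state_dict
```

The model config's expert count would also need to be set to 64 for an exported checkpoint like this to load.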