---
license: apache-2.0
base_model:
- Qwen/Qwen3-30B-A3B
---
|
# Qwen3-16B-A3B
|

A man-made horror beyond your comprehension.
|

But no, seriously, this is my experiment to:

- measure the probability that any given expert activates (over my personal, fairly diverse set of calibration data), per layer
- prune the 64 least-used of the 128 experts in each layer (reordering the router rows and expert indexing per layer)
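
A minimal sketch of the measurement step, assuming top-k routing; the names, shapes, and `top_k` value are illustrative assumptions, not the actual vLLM modification:

```python
# Hypothetical sketch: count how often each expert lands in a token's
# top-k router scores, and turn the counts into activation probabilities.
# All names here are assumptions, not the real instrumented code.
from collections import Counter

def count_activations(router_scores_per_token, top_k=8):
    """router_scores_per_token: one list of per-expert scores per token.
    Returns {expert_id: activation probability over the token set}."""
    num_experts = len(router_scores_per_token[0])
    n_tokens = len(router_scores_per_token)
    counts = Counter()
    for scores in router_scores_per_token:
        # indices of the top-k scoring experts for this token
        top = sorted(range(num_experts), key=lambda i: scores[i], reverse=True)[:top_k]
        counts.update(top)
    return {e: counts[e] / n_tokens for e in range(num_experts)}
```

Running this per layer over the calibration set gives the per-layer usage tables that decide which 64 experts survive.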
|

It can still write semi-coherently, without any additional training or distillation on top of the original 30B MoE.

The .txt files with the original measurements are provided in the repo alongside the exported weights.
|

Custom testing to measure the experts was done on a hacked version of vLLM, and a bespoke script then selectively exported the weights according to those measurements.
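
A minimal sketch of what the selective-export step could look like for one layer; the function and argument names are hypothetical, and the real script operates on actual tensors rather than plain lists:

```python
# Hypothetical sketch: given per-expert usage for a layer, keep the most-used
# experts and reindex them densely (0..keep-1), slicing the router rows and
# expert weights to match. Names are illustrative, not the real export script.

def prune_layer(usage, router_rows, expert_weights, keep=64):
    """usage: activation probability per expert.
    router_rows: one router weight row per expert.
    expert_weights: one weight blob per expert.
    Returns (kept_ids, new_router_rows, new_expert_weights), with survivors
    reordered by descending usage so new index 0 is the most-used expert."""
    kept = sorted(range(len(usage)), key=lambda e: usage[e], reverse=True)[:keep]
    return (kept,
            [router_rows[e] for e in kept],
            [expert_weights[e] for e in kept])
```

Because the router rows are sliced and reordered together with the expert weights, the new router indices line up with the new, dense expert table in each layer.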