|
---
license: apache-2.0
datasets:
- AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations
language:
- en
pipeline_tag: text-generation
tags:
- mixture-of-experts
- moe
- expert-pruning
- gpt-oss
- openai
- reasoning
- instruction-following
- specialized
- efficient
- transformer
- causal-lm
- text-generation
- pytorch
- pruned-model
- domain-specific
---
|
|
|
# Instruction Following GPT-OSS Model (15 Experts) |
|
|
|
**Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/ |
|
|
|
<div align="center"> |
|
|
|
### 👥 Follow the Authors |
|
|
|
**Aman Priyanshu** |
|
[LinkedIn](https://www.linkedin.com/in/aman-priyanshu/) | [Twitter/X](https://x.com/AmanPriyanshu6) | [Website](https://amanpriyanshu.github.io/)
|
|
|
**Supriti Vijay** |
|
[LinkedIn](https://www.linkedin.com/in/supriti-vijay/) | [Twitter/X](https://x.com/SupritiVijay) | [Website](https://supritivijay.github.io/)
|
|
|
</div> |
|
|
|
## Introduction |
|
|
|
This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 15 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for instruction following tasks. |
|
|
|
**⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use. |
|
|
|
This pruning approach reduces the model size while attempting to preserve performance on the target domain. |
|
|
|
## Model Architecture & Statistics |
|
|
|
| Metric | Value |
|--------|-------|
| **Base Model** | openai/gpt-oss-20b |
| **Architecture** | Mixture-of-Experts Transformer |
| **Total Parameters** | ~10.8B (pruned from 21B) |
| **Original Experts per Layer** | 32 |
| **Pruned Experts per Layer** | 15 |
| **Layers** | 24 |
| **Top-k Routing** | 4 |
| **Context Length** | 128K tokens |
| **Attention Heads** | 64 (Query), 8 (Key-Value) |
| **Residual Dimension** | 2880 |
| **Attention Pattern** | Alternating dense & sliding window (128 tokens) |
| **Positional Encoding** | RoPE (Rotary Position Embedding) |
| **Normalization** | RMSNorm |
| **Precision** | BF16 |
| **License** | Apache 2.0 |
| **Specialization** | Instruction Following |
|
|
|
## Pruning Methodology |
|
|
|
### What is Expert Pruning? |
|
Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a small subset of experts is activated for each token. Expert pruning involves:
|
|
|
1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks (sketched after this list)
|
2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain |
|
3. **Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts |
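
A minimal sketch of the usage tracking in step 1, using forward hooks to tally which experts the router selects. The module path (`model.model.layers[i].mlp.router`), the router output format, and the probe prompt are illustrative assumptions rather than a confirmed GPT-OSS API:

```python
from collections import Counter
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Tally how often each expert is routed to, per layer, over a probe prompt.
# NOTE: the `layer.mlp.router` path and the router output shape are assumptions.
model = AutoModelForCausalLM.from_pretrained("openai/gpt-oss-20b", torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained("openai/gpt-oss-20b")

expert_counts = [Counter() for _ in model.model.layers]

def make_hook(layer_idx, top_k=4):
    def hook(module, args, output):
        # Assume the router emits per-token expert scores of shape [tokens, num_experts]
        scores = output[0] if isinstance(output, tuple) else output
        top = scores.reshape(-1, scores.shape[-1]).topk(top_k, dim=-1).indices
        expert_counts[layer_idx].update(top.flatten().tolist())
    return hook

for i, layer in enumerate(model.model.layers):
    layer.mlp.router.register_forward_hook(make_hook(i))

inputs = tokenizer("List three rules for writing a formal email.", return_tensors="pt")
with torch.no_grad():
    model(**inputs)
print(expert_counts[0].most_common(5))  # most-used experts in layer 0
```

Running such probes over a domain-specific evaluation set yields the per-layer activation counts used to rank experts.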
|
|
|
### Our Approach |
|
- **Data-Driven Selection**: Used activation patterns from instruction following evaluation tasks |
|
- **Systematic Reduction**: Reduced from 32 to 15 experts per layer (see the sketch after this list)
|
- **No Retraining**: Direct removal without additional training steps |
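
A minimal sketch of the reduction step under the same assumptions: it keeps the 15 most-activated experts per layer (using `expert_counts` from the sketch above) and slices the router rows and stacked expert tensors to match. The parameter layouts and config field name are assumptions, not the exact GPT-OSS internals:

```python
import torch

KEEP = 15  # experts retained per layer

for i, layer in enumerate(model.model.layers):
    counts = expert_counts[i]
    # Rank experts by activation frequency and keep the 15 most-used ones
    ranked = sorted(counts, key=counts.get, reverse=True)[:KEEP]
    keep = torch.tensor(sorted(ranked))  # preserve relative expert order

    # Keep only the router rows for retained experts
    # (weight layout [num_experts, hidden] is an assumption)
    router = layer.mlp.router
    router.weight = torch.nn.Parameter(router.weight.data[keep])
    if getattr(router, "bias", None) is not None:
        router.bias = torch.nn.Parameter(router.bias.data[keep])

    # Slice the expert tensors the same way (assumes they are registered
    # directly on the experts module, stacked along dim 0)
    for name, p in list(layer.mlp.experts.named_parameters()):
        setattr(layer.mlp.experts, name, torch.nn.Parameter(p.data[keep]))

model.config.num_local_experts = KEEP  # config field name is an assumption
```

Because the router still scores every remaining expert and top-4 routing is unchanged, no retraining is needed; only the menu of available experts shrinks.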
|
|
|
## Performance & Applications |
|
|
|
### Pruning Benefits |
|
- **Smaller Memory Footprint**: Retains 15/32 ≈ 46.9% of the original expert parameters (see the estimate after this list)
|
- **Reduced Computational Load**: Fewer routing decisions during inference |
|
- **Focused Capabilities**: Retains experts relevant to instruction following tasks |
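
The 46.9% figure is simply 15/32 of the stacked expert weights. A back-of-the-envelope check, assuming each expert consists of three 2880×2880 projections (an approximation inferred from the architecture table above):

```python
# Rough parameter estimate; per-expert dimensions are an assumption.
hidden = inter = 2880
per_expert = 3 * hidden * inter       # gate + up + down projections ≈ 24.9M
original = 24 * 32 * per_expert       # ≈ 19.1B expert parameters
pruned = 24 * 15 * per_expert         # ≈ 9.0B expert parameters
print(pruned / original)              # 0.46875 → the "46.9%" figure
# Adding the ~1.8B non-expert parameters gives roughly the ~10.8B total above.
```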
|
|
|
### Use Cases |
|
- **Speculative Decoding**: Draft model for full GPT-OSS-20B (see the sketch after this list)
|
- **Resource-Constrained Deployment**: Edge devices, mobile applications |
|
- **Research**: Study expert specialization in MoE models |
|
- **Fine-tuning**: Smaller base model for domain adaptation |
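
A minimal sketch of the speculative-decoding use case via transformers' assisted generation, where the pruned model drafts tokens and the full GPT-OSS-20B verifies them. It assumes both models fit in memory, and the prompt is illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Both models share the GPT-OSS tokenizer, which assisted generation requires.
tokenizer = AutoTokenizer.from_pretrained("openai/gpt-oss-20b")
target = AutoModelForCausalLM.from_pretrained(
    "openai/gpt-oss-20b", torch_dtype=torch.bfloat16, device_map="auto"
)
draft = AutoModelForCausalLM.from_pretrained(
    "AmanPriyanshu/gpt-oss-10.8b-specialized-instruction_following-pruned-moe-only-15-experts",
    torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer("Write a two-line formal apology.", return_tensors="pt").to(target.device)
# `assistant_model` enables assisted (speculative) generation in transformers
outputs = target.generate(**inputs, assistant_model=draft, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The speed-up depends on how often the target model accepts the draft's tokens, so gains are largest when the pruned experts match the prompt domain.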
|
|
|
*Note: Performance may vary depending on how well the pruned experts match your specific use case.* |
|
|
|
## Motivation & Expert Selection |
|
|
|
This instruction-following model leverages experts that excelled at constraint satisfaction tasks from the Tulu3 Persona Instruction Following dataset. These experts specialize in precise adherence to user specifications and formatting requirements.
|
|
|
The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks: |
|
|
|
- **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets) |
|
- **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law |
|
- **SORRY-Bench**: Safety evaluation across harmful content categories |
|
- **Tulu3**: Persona-driven instruction following with verifiable constraints |
|
- **Polyglot-or-Not**: Multilingual factual completion tasks |
|
|
|
By identifying experts that consistently activated for instruction following tasks, we created this specialized model that maintains domain expertise while cutting the expert count from 32 to 15 per layer, significantly reducing computational requirements.
|
|
|
## Dataset & Analysis Foundation |
|
|
|
This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at: |
|
🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations** |
|
|
|
The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning. |
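
To explore the dataset yourself, a quick sketch (the split and column names are not documented here, so it simply prints what is available):

```python
from datasets import load_dataset

# Inspect the expert-activation dataset; splits and columns are printed
# rather than assumed, since the schema isn't documented in this card.
ds = load_dataset("AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations")
print(ds)
```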
|
|
|
### Pruning Methodology |
|
Our approach involves: |
|
1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks |
|
2. **Expert Ranking**: Identification of the most frequently activated experts for target domains |
|
3. **Systematic Pruning**: Reduction from 32 to 15 experts while preserving router functionality |
|
4. **Quality Validation**: Testing to ensure maintained performance on target tasks |
|
|
|
*This is a direct pruning approach - no additional training was performed. The model inherits its weights unchanged from the original GPT-OSS-20B, restricted to the selected experts.*
|
|
|
## Usage |
|
|
|
### CPU Inference |
|
|
|
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Load the specialized model on CPU
model = AutoModelForCausalLM.from_pretrained(
    "AmanPriyanshu/gpt-oss-10.8b-specialized-instruction_following-pruned-moe-only-15-experts",
    torch_dtype=torch.bfloat16,
    device_map="cpu",
    trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-10.8b-specialized-instruction_following-pruned-moe-only-15-experts")

# Generate with the model
messages = [
    {"role": "user", "content": "Write a formal email to a professor requesting a meeting, including: subject line, greeting, purpose, proposed times, and professional closing."}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
    return_dict=True,
    reasoning_effort="medium"
)

# Ensure inputs are on the same device as the model
inputs = {k: v.to(model.device) for k, v in inputs.items()}

outputs = model.generate(
    **inputs,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.1,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
    eos_token_id=tokenizer.eos_token_id
)

# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
|
|
|
### Apple Silicon (MPS) Inference |
|
|
|
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Check MPS availability and load model
device = "mps" if torch.backends.mps.is_available() else "cpu"

model = AutoModelForCausalLM.from_pretrained(
    "AmanPriyanshu/gpt-oss-10.8b-specialized-instruction_following-pruned-moe-only-15-experts",
    torch_dtype=torch.float16,  # Better MPS compatibility
    device_map=device,
    trust_remote_code=True,
    low_cpu_mem_usage=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-10.8b-specialized-instruction_following-pruned-moe-only-15-experts")

# Generate with the model
messages = [
    {"role": "user", "content": "Write a formal email to a professor requesting a meeting, including: subject line, greeting, purpose, proposed times, and professional closing."}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
    return_dict=True,
    reasoning_effort="medium"
)

# Move inputs to model device
inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()}

# Use torch.no_grad for MPS stability
with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_new_tokens=512,
        do_sample=True,
        temperature=0.1,
        top_p=0.9,
        pad_token_id=tokenizer.eos_token_id,
        eos_token_id=tokenizer.eos_token_id,
        use_cache=True
    )

# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
|
|
|
### GPU Inference |
|
|
|
For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change: |
|
```python
device_map="auto",           # will automatically use the GPU if available
torch_dtype=torch.bfloat16,  # or torch.float16
```
|
|
|
## Model Examples |
|
|
|
The following examples demonstrate the model's performance on instruction following tasks: |
|
|
|
### Example 1 |
|
**Question:** Write a formal email to a professor requesting a meeting, including: subject line, greeting, purpose, proposed times, and professional closing. |
|
|
|
**Response:** |
|
``` |
|
<|channel|>analysis<|message|>Need formal email.<|end|><|start|>assistant<|channel|>final<|message|>**Subject:** Request for Meeting – Discussion of Research Project Progress |
|
|
|
Dear Professor [Last Name], |
|
|
|
I hope this message finds you well. I would like to request a brief meeting to discuss the current status of my research project and to seek your guidance on the next steps. |
|
|
|
Could we meet at one of the following times? |
|
- Tuesday, April 12 at 10:00 a.m. |
|
- Wednesday, April 13 at 2:00 p.m. |
|
- Thursday, April 14 at 11:30 a.m. |
|
|
|
If none of these slots work for you, I would be happy to adjust my schedule to accommodate your availability. |
|
|
|
Thank you for your time and consideration. I look forward to our conversation. |
|
|
|
Sincerely, |
|
|
|
[Your Full Name] |
|
[Program / Department] |
|
[University] |
|
[Phone] | [Email]<|return|> |
|
``` |
|
|
|
--- |
|
|
|
### Example 2 |
|
**Question:** Create a numbered list of steps for making scrambled eggs, using exactly 6 steps. |
|
|
|
**Response:** |
|
``` |
|
<|channel|>analysis<|message|>Need 6 numbered steps.<|end|><|start|>assistant<|channel|>final<|message|>1. **Gather ingredients** – Get a bowl of eggs, a small amount of milk or cream, a handful of fresh or frozen vegetables, and a spoon or whisk. |
|
2. **Heat the pan** – Turn a medium‑spot skillet or frying pan on to a gentle, steady heat on a stove or hot‑plate. |
|
3. **Add the milk** – Pour the milk or cream into the pan, letting it spread evenly across the surface. |
|
4. **Whate the eggs** – Crack the eggs into the pan and whisk them together with the milk, creating a smooth, slightly thickened mixture. |
|
5. **Stheat the vegetables** – Sprinkle the vegetables over the mixture, letting them soften and mingle with the eggs. |
|
6. **Serve** – Once the mixture is lightly browned and the vegetables are just beginning to melt, lift the pan, scoop the eggs into a bowl, and serve hot.<|return|> |
|
``` |
|
|
|
--- |
|
|
|
### Example 3 |
|
**Question:** Summarize the water cycle in exactly 50 words, no more, no less. |
|
|
|
**Response:** |
|
``` |
|
<|channel|>analysis<|message|>Need 50 words exactly. Count. Let's craft: "Water originates in the atmosphere as precipitation, falls onto land, infiltrates soil, and moves through plants and organisms. It evaporates from surfaces, condensates into vapor, travels through the air, condensates again, and returns to the ground as rain. This continuous loop replenizes ecosystems, supports life, and sustens planetary balance." Count words. Let's count: Water(1) originates(2) in(3) the(4) atmosphere(5) as(6) precipitation,(7) falls(8) onto(9) land,(10) infiltrates(11) soil,(12) and(13) moves(14) through(15) plants(16) and(17) organisms.(18) It(19) evaporates(20) from(21) surfaces,(22) condensates(23) into(24) vapor,(25) travels(26) through(27) the(28) air,(29) condensates(30) again,(31) and(32) returns(33) to(34) the(35) ground(36) as(37) rain.(38) This(39) continuous(40) loop(41) replenizes(42) ecosystems,(43) supports(44) life,(45) and(46) sustens(47) planetary(48) balance.(49) Need one more word: "foreever" (50). Let's add "foreever" at end.<|end|><|start|>assistant<|channel|>final<|message|>Water originates in the atmosphere as precipitation, falls onto land, infiltrates soil, and moves through plants and organisms. It evaporates from surfaces, condensates into vapor, travels through the air, condensates again, and returns to the ground as rain. This continuous loop replenizes ecosystems, supports life, and sustens planetary balance foreever.<|return|> |
|
``` |
|
|
|
--- |
|
|
|
## Citation |
|
|
|
If you use this model in your research, please cite: |
|
|
|
```bibtex |
|
@misc{priyanshu2025gptoss, |
|
title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}}, |
|
author={Priyanshu, Aman and Vijay, Supriti}, |
|
year={2025}, |
|
howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}}, |
|
note={Interactive analysis tool for expert activation patterns in MoE architectures} |
|
} |
|
``` |
|
|
|
## References & Resources |
|
|
|
- **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/) |
|
- **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b) |
|
- **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations) |
|
- **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/) |
|
- **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss) |
|
|