---
license: apache-2.0
tags:
- moe
- mergekit
---
|
|
|
# Beyonder-4x7b |
|
|
|
This model is a Mixture of Experts (MoE) made with [mergekit](https://github.com/cg123/mergekit) (mixtral branch). It uses the following base models:

* [openchat/openchat-3.5-1210](https://huggingface.co/openchat/openchat-3.5-1210)
* [beowolx/CodeNinja-1.0-OpenChat-7B](https://huggingface.co/beowolx/CodeNinja-1.0-OpenChat-7B)
* [maywell/PiVoT-0.1-Starling-LM-RP](https://huggingface.co/maywell/PiVoT-0.1-Starling-LM-RP)
* [WizardLM/WizardMath-7B-V1.1](https://huggingface.co/WizardLM/WizardMath-7B-V1.1)
|
|
|
## 🧩 Configuration |
|
|
|
```yaml
base_model: openchat/openchat-3.5-1210
gate_mode: hidden
experts:
  - source_model: openchat/openchat-3.5-1210
    positive_prompts:
      - "chat"
      - "assistant"
      - "tell me"
      - "explain"
    negative_prompts:
      - "storywriting"
      - "mathematics"
      - "reasoning"
      - "code"
      - "programming"
  - source_model: beowolx/CodeNinja-1.0-OpenChat-7B
    positive_prompts:
      - "code"
      - "python"
      - "javascript"
      - "programming"
      - "algorithm"
    negative_prompts:
      - "chat"
      - "assistant"
      - "storywriting"
      - "mathematics"
      - "reasoning"
  - source_model: maywell/PiVoT-0.1-Starling-LM-RP
    positive_prompts:
      - "storywriting"
      - "write"
      - "scene"
      - "story"
      - "character"
    negative_prompts:
      - "chat"
      - "assistant"
      - "code"
      - "programming"
      - "mathematics"
      - "reasoning"
  - source_model: WizardLM/WizardMath-7B-V1.1
    positive_prompts:
      - "reason"
      - "math"
      - "mathematics"
      - "solve"
      - "count"
    negative_prompts:
      - "chat"
      - "assistant"
      - "code"
      - "programming"
      - "storywriting"
```
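
To reproduce the merge, this configuration can be saved to a file and passed to mergekit's MoE script. The snippet below is a minimal, notebook-style sketch (in the same style as the Usage section below): it assumes the `mergekit-moe` entry point from the mixtral branch and a placeholder `config.yaml` holding the configuration above; the exact command and options may differ between mergekit versions.

```python
# Sketch only: install the mixtral branch of mergekit and run the MoE merge.
# `mergekit-moe` and its arguments are assumptions and may vary by version;
# `config.yaml` is a placeholder for the configuration shown above, and
# `merge` is the output directory for the merged model.
!pip install -qU git+https://github.com/cg123/mergekit.git@mixtral
!mergekit-moe config.yaml merge
```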
|
|
|
## 💻 Usage |
|
|
|
```python
!pip install -qU transformers bitsandbytes accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "mlabonne/Beyonder-4x7b"

# Load the tokenizer and a text-generation pipeline, quantizing the model to 4-bit
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)

# Build the prompt with the model's chat template, then sample a completion
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
|
|
Output: |
|
|
|
```
A Mixture of Experts (MoE) is a neural network architecture that combines the strengths of multiple expert networks to make predictions. It leverages the idea of ensemble learning, where multiple models work together to improve performance. In each MoE, a gating network is used to select the most relevant expert for the input. The final output is a weighted combination of the expert outputs, determined by the gating network's predictions.
```
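
For setups with enough VRAM to hold all four experts, the model can also be loaded without 4-bit quantization. The snippet below is a generic `transformers` loading sketch, not part of the original card; `device_map="auto"` requires `accelerate`.

```python
# Alternative loading sketch (assumption, not from the original card):
# load the merged model in fp16 across available devices instead of 4-bit.
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "mlabonne/Beyonder-4x7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",  # requires accelerate
)
```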