---
license: apache-2.0
tags:
- moe
- merge
- mergekit
- lazymergekit
- mlabonne/NeuralBeagle14-7B
- mlabonne/NeuralDaredevil-7B
---

---
**This is a repository of GGUF quants for DareBeagel-2x7B.**
---

**Available Quants**

* F16
* Q8_0
* Q6_K
* Q5_0
* Q5_K_M
* Q5_K_S
* Q4_0
* Q4_K_M
* Q4_K_S
* Q3_K_M
* Q3_K_S
* Q2_K

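These names follow llama.cpp's quantization scheme: the digit is the approximate bit-width per weight, so Q4_K_M is a ~4-bit quant balancing size and quality, while F16 (unquantized) and Q8_0 are the largest and closest to lossless. Below is a minimal sketch of downloading and running one quant with `llama-cpp-python`; the repository id and GGUF filename are assumptions, so check this repo's file list for the exact names.

```python
# Minimal sketch: fetch one GGUF quant and run it with llama-cpp-python.
!pip install -qU huggingface_hub llama-cpp-python

from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download a single quant file rather than cloning the whole repository.
# Both the repo id and the filename below are assumptions -- adjust them to
# match the actual entries on this repository's "Files" tab.
gguf_path = hf_hub_download(
    repo_id="StopTryharding/DareBeagel-2x7B-GGUF",
    filename="darebeagel-2x7b.Q4_K_M.gguf",
)

# Load the quantized model and generate a short completion.
llm = Llama(model_path=gguf_path, n_ctx=2048)
out = llm("Explain what a Mixture of Experts is in less than 100 words.", max_tokens=128)
print(out["choices"][0]["text"])
```
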
# DareBeagel-2x7B

DareBeagel-2x7B is a Mixture of Experts (MoE) made with the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [mlabonne/NeuralBeagle14-7B](https://huggingface.co/mlabonne/NeuralBeagle14-7B)
* [mlabonne/NeuralDaredevil-7B](https://huggingface.co/mlabonne/NeuralDaredevil-7B)

## 🧩 Configuration

```yaml
base_model: mlabonne/NeuralBeagle14-7B
gate_mode: random
experts:
  - source_model: mlabonne/NeuralBeagle14-7B
    positive_prompts: [""]
  - source_model: mlabonne/NeuralDaredevil-7B
    positive_prompts: [""]
```

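A note on the config: as I understand mergekit's MoE mode, `gate_mode: random` initializes the router weights randomly rather than computing them from each expert's `positive_prompts`, which is why the prompt lists can stay empty here.
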
## 💻 Usage

```python
# Install dependencies (notebook syntax; drop the leading "!" in a plain shell).
!pip install -qU transformers bitsandbytes accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "shadowml/DareBeagel-2x7B"

# Build a text-generation pipeline, loading the weights in 4-bit to reduce VRAM use.
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)

# Apply the model's chat template, then sample a completion.
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
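
Note that this snippet loads the full-precision source model from the Hub through `transformers`; the GGUF files listed above are instead meant for llama.cpp-compatible runtimes, as in the earlier sketch.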