Update README.md
README.md CHANGED
@@ -14,4 +14,4 @@ This is a quantized version of [WizardLM-2-4x7B-MoE](https://huggingface.co/Skyl
 
 Please be sure to set experts per token to 4 for the best results! Context length should be the same as Mistral-7B-Instruct-v0.1 (8k tokens). For instruction templates, Vicuna-v1.1 is recommended.
 
-For more information see the original repository.
+For more information see the [original repository](https://huggingface.co/Skylaude/WizardLM-2-4x7B-MoE).
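
The "experts per token = 4" note from the README translates to a routing setting at load time. Below is a minimal sketch, assuming the model loads as a Mixtral-style MoE through `transformers` (so "experts per token" corresponds to `num_experts_per_tok`); the linked base repo id is used as a placeholder, so swap in the quantized repo and whatever quantization loader you actually use.

```python
# Minimal sketch, not the repo's official loading code. Assumptions:
# - the model loads as a Mixtral-style MoE via transformers, so
#   "experts per token" corresponds to `num_experts_per_tok`;
# - the base repo id is a placeholder; point it at the quantized repo
#   you downloaded and use your preferred quantization backend.
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer

model_id = "Skylaude/WizardLM-2-4x7B-MoE"  # placeholder: substitute the quantized repo id

config = AutoConfig.from_pretrained(model_id)
config.num_experts_per_tok = 4  # route each token through 4 experts, as the README advises

model = AutoModelForCausalLM.from_pretrained(model_id, config=config, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Vicuna-v1.1 style prompt, the recommended instruction template.
# Context length matches Mistral-7B-Instruct-v0.1 (8k tokens), so keep
# prompt plus generation within that budget.
prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions. "
    "USER: What does experts per token control in a MoE model? ASSISTANT:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```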