---
license: apache-2.0
tags:
  - moe
  - merge
  - mergekit
  - lazymergekit
  - mlabonne/NeuralBeagle14-7B
  - mlabonne/NeuralDaredevil-7B
  - text-generation-inference
  - Text Generation
---

This is a repository of GGUF Quants for DareBeagel-2x7B

Original Model Available Here: https://huggingface.co/shadowml/DareBeagel-2x7B

## Available Quants

- Q6_K
- Q5_K_M
- Q5_K_S
- Q4_K_M
- Q4_K_S

More coming... slow upload speed.
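
If you want to grab a file programmatically rather than through the web UI, the snippet below is a minimal sketch using `huggingface_hub`. The repo id and file name are assumptions; check this repo's Files tab for the exact names.

```python
# Minimal sketch: download one of the GGUF quants with huggingface_hub.
# repo_id and filename are assumptions -- substitute the real ones from
# this repo's Files tab.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="StopTryharding/DareBeagel-2x7B-GGUF",  # assumed repo id
    filename="darebeagel-2x7b.Q4_K_M.gguf",         # assumed file name
)
print(path)  # local cache path of the downloaded quant
```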

# DareBeagel-2x7B

DareBeagel-2x7B is a Mixture of Experts (MoE) made with the following models using LazyMergekit:

- [mlabonne/NeuralBeagle14-7B](https://huggingface.co/mlabonne/NeuralBeagle14-7B)
- [mlabonne/NeuralDaredevil-7B](https://huggingface.co/mlabonne/NeuralDaredevil-7B)

## 🧩 Configuration

```yaml
base_model: mlabonne/NeuralBeagle14-7B
gate_mode: random
experts:
  - source_model: mlabonne/NeuralBeagle14-7B
    positive_prompts: [""]
  - source_model: mlabonne/NeuralDaredevil-7B
    positive_prompts: [""]
```

## 💻 Usage

Load the GGUF in Kobold.cpp or whatever backend you prefer. I found Alpaca (and Alpaca-ish) prompts worked well. Settings that worked well for me (a rough llama-cpp-python equivalent follows the list):

- Min P: 0.1
- Dynamic Temperature: min 0, max 3
- Rep Pen: 1.03
- Rep Pen Range: 1000
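
For a non-Kobold.cpp route, here is a minimal sketch using llama-cpp-python with an Alpaca-style prompt and an approximation of the settings above. The file name, context size, and prompt are assumptions, and Dynamic Temperature / Rep Pen Range are Kobold.cpp-side samplers, so a fixed temperature stands in for them here.

```python
# Minimal sketch: run one of these quants with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="darebeagel-2x7b.Q4_K_M.gguf",  # assumed local file name
    n_ctx=4096,                                # assumed context size
    n_gpu_layers=-1,                           # offload all layers if a GPU is available
)

# Alpaca-style prompt, which worked well in testing per the notes above.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nExplain what a Mixture of Experts model is.\n\n"
    "### Response:\n"
)

out = llm(
    prompt,
    max_tokens=256,
    min_p=0.1,            # Min P 0.1
    repeat_penalty=1.03,  # Rep Pen 1.03
    temperature=1.0,      # stand-in for Dynamic Temperature (0-3); Rep Pen Range
                          # is not exposed here and is left at the backend default
)
print(out["choices"][0]["text"])
```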