# FractalSoup-L3-8b : Forbidden Soup
## Recommended Settings

- Temperature: 1.12-1.22
- Min-P: 0.075
- Top-K: 50
- Repetition Penalty: 1.1
- Template: Llama-3-instruct
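Below is a minimal inference sketch applying these settings with Hugging Face `transformers`. The repo id is a placeholder assumption (substitute the actual Hub id), and `min_p` sampling requires a reasonably recent transformers release:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/FractalSoup-L3-8b"  # placeholder: substitute the actual repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# The tokenizer carries the Llama-3-instruct chat template (chat_template: llama3),
# so apply_chat_template renders the recommended prompt format automatically.
messages = [{"role": "user", "content": "Write a short scene set in a rain-soaked city."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(
    inputs,
    max_new_tokens=512,
    do_sample=True,
    temperature=1.15,        # recommended range: 1.12-1.22
    min_p=0.075,
    top_k=50,
    repetition_penalty=1.1,
)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```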
This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).
## Merge Details

### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method, with Sao10K/L3-8B-Stheno-v3.2 as the base. DARE sparsifies each model's task vector (its deltas from the base) by randomly dropping parameters according to `density` and rescaling the survivors, and TIES resolves sign conflicts among the remaining deltas before they are summed into the base at the given `weight`.
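As a toy illustration of what `density: 0.5` controls in the configuration below, here is a minimal sketch of the DARE drop-and-rescale step. This is not mergekit's actual implementation, just the core idea:

```python
import torch

def dare_drop(delta: torch.Tensor, density: float) -> torch.Tensor:
    """Keep each delta element with probability `density`; rescale survivors
    by 1/density so the expected contribution is preserved."""
    mask = torch.bernoulli(torch.full_like(delta, density))
    return delta * mask / density

base = torch.randn(4)
finetuned = base + 0.1 * torch.randn(4)  # pretend fine-tuned weights
delta = finetuned - base                 # task vector
print(dare_drop(delta, density=0.5))     # ~half the deltas survive, doubled
```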
### Models Merged

The following models were included in the merge:
- Sao10K/L3-8B-Stheno-v3.2
- SentientAGI/Dobby-Mini-Leashed-Llama-3.1-8B
- Sao10K/L3-8B-Niitama-v1
- SentientAGI/Dobby-Mini-Unhinged-Llama-3.1-8B
- NousResearch/Hermes-3-Llama-3.1-8B
### Configuration

The following YAML configuration was used to produce this model:
```yaml
models:
  - model: SentientAGI/Dobby-Mini-Unhinged-Llama-3.1-8B # generalist, unhinged
    parameters:
      weight: 0.20
      density: 0.5
  - model: NousResearch/Hermes-3-Llama-3.1-8B # generalist, intelligent
    parameters:
      weight: 0.20
      density: 0.5
  - model: Sao10K/L3-8B-Stheno-v3.2 # a good base
    parameters:
      weight: 0.20
      density: 0.5
  - model: SentientAGI/Dobby-Mini-Leashed-Llama-3.1-8B # generalist, leashed
    parameters:
      weight: 0.20
      density: 0.5
  - model: Sao10K/L3-8B-Niitama-v1 # enhanced creativity
    parameters:
      weight: 0.20
      density: 0.5
merge_method: dare_ties
base_model: Sao10K/L3-8B-Stheno-v3.2
parameters:
  normalize: false
out_dtype: bfloat16
chat_template: llama3
tokenizer:
  source: base
```
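To reproduce the merge, save this configuration as `config.yaml` and run mergekit's CLI, e.g. `mergekit-yaml config.yaml ./FractalSoup-L3-8b` (the output directory name here is illustrative).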