Upload folder using huggingface_hub

Files changed:
- README.md (+19, -61)
- measurement.json (+0, -0)

README.md (CHANGED)
````diff
@@ -1,79 +1,37 @@
 ---
 base_model:
-
-
+- LatitudeGames/Wayfarer-Large-70B-Llama-3.3
+- Sao10K/L3.3-70B-Euryale-v2.3
+- crestf411/L3.1-nemotron-sunfall-v0.7.0
+- SicariusSicariiStuff/Negative_LLAMA_70B
+- EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1
 library_name: transformers
 tags:
 - mergekit
 - merge
 
 ---
-#
+# GeneticLemonadev2-c
 
-Inspired to learn how to merge by the Nevoria series from [SteelSkull](https://huggingface.co/Steelskull).
-
-This model is the final result of the Genetic Lemonade series.
-
-Designed for RP, this model is mostly uncensored and focused on striking a balance between writing style, creativity and intelligence.
-
-Compared to the previous Genetic Lemonade Unleashed:
-- Better instruction following
-- Increased coherence
-- *Potentially* slightly more uncensored
-
-## SillyTavern Settings
-
-[Llam@ception](https://huggingface.co/Konnect1221/The-Inception-Presets-Methception-LLamaception-Qwenception/tree/main/Llam%40ception) is recommended for sane defaults if you are unsure; import the presets into SillyTavern and they are plug and play.
-
-### Sampler Settings
-- Temp: 0.9-1.0
-- MinP: 0.03-0.05
-- DRY: 0.8, 1.75, 4
-
-Set temperature last and neutralize the other samplers. This model natively strikes a balance of creativity and intelligence.
-
-### Instruct
-
-Llama-3-Instruct-Names, but you will need to uncheck "System same as user".
-
-## Quants
-
-### GGUF
-
-- [Static quants by mradermacher](https://huggingface.co/mradermacher/L3.3-GeneticLemonade-Final-70B-GGUF)
-- [iMatrix quants by mradermacher](https://huggingface.co/mradermacher/L3.3-GeneticLemonade-Final-70B-i1-GGUF)
-
-### EXL2
-
-- [4.5bpw](https://huggingface.co/zerofata/L3.3-GeneticLemonade-Final-70B-4.5bpw-h6-exl2)
-- [6bpw](https://huggingface.co/zerofata/L3.3-GeneticLemonade-Final-70B-6bpw-h8-exl2)
+
+This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
 
 ## Merge Details
 ### Merge Method
 
-This model was merged using the [SCE](https://arxiv.org/abs/2408.07990) merge method.
+This model was merged using the [SCE](https://arxiv.org/abs/2408.07990) merge method using ./Base_6_v2 as a base.
 
-The second merge aims to impart specific RP / creative-writing knowledge, again focusing on finding high-performing models that use, or likely use, different datasets.
+### Models Merged
 
-merge_method: sce
-base_model: meta-llama/Llama-3.3-70B-Instruct
-out_dtype: bfloat16
-dype: float32
-tokenizer:
-  source: base
-```
+The following models were included in the merge:
+* [LatitudeGames/Wayfarer-Large-70B-Llama-3.3](https://huggingface.co/LatitudeGames/Wayfarer-Large-70B-Llama-3.3)
+* [Sao10K/L3.3-70B-Euryale-v2.3](https://huggingface.co/Sao10K/L3.3-70B-Euryale-v2.3)
+* [crestf411/L3.1-nemotron-sunfall-v0.7.0](https://huggingface.co/crestf411/L3.1-nemotron-sunfall-v0.7.0)
+* [SicariusSicariiStuff/Negative_LLAMA_70B](https://huggingface.co/SicariusSicariiStuff/Negative_LLAMA_70B)
+* [EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1](https://huggingface.co/EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1)
+
+### Configuration
+
+The following YAML configuration was used to produce this model:
 
 ```yaml
 models:
@@ -89,4 +47,4 @@ out_dtype: bfloat16
 dype: float32
 tokenizer:
   source: union
-```
+```
````
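The configuration in the diff above is truncated, so only the head and tail of the YAML survive. For readers new to mergekit, here is a minimal sketch of what an SCE config of this shape looks like. The model list and the `select_topk` value are illustrative assumptions, not the settings actually used for this merge; note also that mergekit's key for working precision is `dtype`, so the `dype` spelling in the diff is likely a typo carried over from the original file.

```yaml
# Illustrative sketch only -- model list and parameter values are assumptions,
# not the actual settings behind this merge (its real config is truncated above).
models:
  - model: LatitudeGames/Wayfarer-Large-70B-Llama-3.3
  - model: Sao10K/L3.3-70B-Euryale-v2.3
  - model: EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1
merge_method: sce
base_model: meta-llama/Llama-3.3-70B-Instruct
parameters:
  select_topk: 0.1   # fraction of highest-variance delta elements kept per tensor
dtype: float32       # working precision ("dype" in the diff is likely this key, misspelled)
out_dtype: bfloat16  # precision of the saved weights
tokenizer:
  source: union      # build the tokenizer from the union of the input vocabularies
```

A config like this is typically run with mergekit's CLI, e.g. `mergekit-yaml config.yaml ./merged-model`.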
measurement.json (ADDED)

The diff for this file is too large to render; see the raw diff.