---
base_model:
- JackCloudman/QwQ-56B-Ghost
library_name: transformers
tags:
- mergekit
- merge
- uncensored
- abliterated
- chat
license: apache-2.0
language:
- en
pipeline_tag: text-generation
---
# JackCloudman/QwQ-56B-Ghost-GGUF

**QwQ-56B-Ghost** is an extended model derived from JackCloudman/QwQ-32B-Preview-jackterated using the _passthrough merging_ technique, inspired by models like [miqu-1-120b](https://huggingface.co/wolfram/miqu-1-120b). The result is a **~56B parameter model**: the base model's 64 layers are restacked as seven overlapping 16-layer slices, yielding a 112-layer network with roughly 1.75x the original parameter count.
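
Since this repository ships GGUF quantizations, the model can be run with llama.cpp or its bindings. Below is a minimal loading sketch using `llama-cpp-python`; the quant filename and sampling settings are placeholders, so point them at whichever GGUF file you actually downloaded:

```python
from llama_cpp import Llama

# Load a local GGUF file; the filename below is a placeholder —
# substitute the quant you downloaded from this repo.
llm = Llama(
    model_path="./QwQ-56B-Ghost-Q4_K_M.gguf",  # hypothetical filename
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to GPU if available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```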

## Merge Details
### Merge Method

This model was merged using the passthrough merge method.
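
Passthrough merging copies whole layer ranges from the source model into a new, deeper stack rather than averaging weights, so the merged depth is simply the sum of the slice lengths. A quick sanity check over the slice list from the configuration below (plain arithmetic, not mergekit code):

```python
# Layer ranges copied from the mergekit configuration below.
slices = [(0, 16), (8, 24), (16, 32), (24, 40), (32, 48), (40, 56), (48, 64)]

total_layers = sum(end - start for start, end in slices)
print(total_layers)       # 112 layers in the merged model
print(total_layers / 64)  # 1.75x the 64-layer base, hence ~32B * 1.75 ≈ 56B params
```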

### Models Merged

The following models were included in the merge:
* JackCloudman/QwQ-32B-Preview-jackterated

### Configuration

The following YAML configuration was used to produce this model:

```yaml
dtype: float16
merge_method: passthrough
slices:
- sources:
  - layer_range: [0, 16]
    model: JackCloudman/QwQ-32B-Preview-jackterated
- sources:
  - layer_range: [8, 24]
    model: JackCloudman/QwQ-32B-Preview-jackterated
- sources:
  - layer_range: [16, 32]
    model: JackCloudman/QwQ-32B-Preview-jackterated
- sources:
  - layer_range: [24, 40]
    model: JackCloudman/QwQ-32B-Preview-jackterated
- sources:
  - layer_range: [32, 48]
    model: JackCloudman/QwQ-32B-Preview-jackterated
- sources:
  - layer_range: [40, 56]
    model: JackCloudman/QwQ-32B-Preview-jackterated
- sources:
  - layer_range: [48, 64]
    model: JackCloudman/QwQ-32B-Preview-jackterated
```
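
To reproduce the merge, save the configuration above as `config.yml` and feed it to mergekit. A minimal sketch using mergekit's Python entry point, assuming a recent mergekit install (the `mergekit-yaml config.yml ./QwQ-56B-Ghost` CLI is equivalent):

```python
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Parse the YAML configuration shown above.
with open("config.yml", "r", encoding="utf-8") as f:
    config = MergeConfiguration.model_validate(yaml.safe_load(f))

# Run the passthrough merge and write the result to ./QwQ-56B-Ghost.
run_merge(
    config,
    "./QwQ-56B-Ghost",
    options=MergeOptions(
        cuda=False,           # set True to merge on GPU
        copy_tokenizer=True,  # carry the base model's tokenizer over
        lazy_unpickle=True,   # stream shards to keep RAM usage down
    ),
)
```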

## Credits
- Qwen and QwQ-32B-Preview
- @FailSpy's [notebook](https://huggingface.co/failspy/llama-3-70B-Instruct-abliterated/blob/main/ortho_cookbook.ipynb) with the abliteration technique
- [Mergekit](https://github.com/cg123/mergekit)
- [Archive](https://youtu.be/FzmaF5p96pc?si=RqMjDgEh5J_090Js&t=2015)
- You :D