---
tags:
- merge
- mergekit
- lazymergekit
- ZeroXClem-Qwen2.5-7B-HomerFuse-NerdExp
language:
- en
base_model:
- allknowingroger/HomerSlerp6-7B
- jeffmeloy/Qwen2.5-7B-nerd-uncensored-v1.0
- bunnycore/Blabbertron-1.0
- bunnycore/Qwen2.5-7B-Fuse-Exp
- Xiaojian9992024/Qwen2.5-Dyanka-7B-Preview
pipeline_tag: text-generation
library_name: transformers
---

# ZeroXClem-Qwen2.5-7B-HomerFuse-NerdExp

### **Overview**

**ZeroXClem-Qwen2.5-7B-HomerFuse-NerdExp** is a finely tuned merge built on **HomerSlerp6-7B**, fusing several **Qwen2.5-7B-based models** into a blend of reasoning, creativity, and conversational depth. It is an **experimental fusion** designed for **high adaptability**, **deep knowledge**, and **engaging responses** across a wide range of use cases.

---

## **Merge Details**

- **Merge Method:** `model_stock`
- **Base Model:** [allknowingroger/HomerSlerp6-7B](https://huggingface.co/allknowingroger/HomerSlerp6-7B)
- **Data Type:** `bfloat16`
- **Tokenizer Source:** `allknowingroger/HomerSlerp6-7B`
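
The `model_stock` method picks its interpolation weight from the geometry of the fine-tuned checkpoints rather than from a hand-tuned coefficient. A minimal two-model sketch on plain Python lists, illustrative of the idea only, not mergekit's actual implementation (which handles many models and per-layer weights):

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def model_stock_two(w_base, w1, w2):
    """Toy Model-Stock merge of two fine-tunes on flat weight vectors.

    t = 2*cos(theta) / (1 + cos(theta)), where theta is the angle between
    the two task vectors (w1 - w_base, w2 - w_base): the more the
    fine-tunes agree, the closer the result moves to their average.
    """
    d1 = [a - b for a, b in zip(w1, w_base)]
    d2 = [a - b for a, b in zip(w2, w_base)]
    cos_theta = dot(d1, d2) / math.sqrt(dot(d1, d1) * dot(d2, d2))
    t = 2 * cos_theta / (1 + cos_theta)
    w_avg = [(a + b) / 2 for a, b in zip(w1, w2)]
    return [t * a + (1 - t) * b for a, b in zip(w_avg, w_base)]

# Agreeing fine-tunes (cos = 1, t = 1) reduce to their plain average:
print(model_stock_two([0.0, 0.0], [1.0, 0.0], [1.0, 0.0]))  # [1.0, 0.0]
# Orthogonal task vectors (cos = 0, t = 0) fall back to the base weights:
print(model_stock_two([0.0, 0.0], [1.0, 0.0], [0.0, 1.0]))  # [0.0, 0.0]
```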

### **Merged Models**

This fusion combines models selected to enhance **general intelligence**, **technical depth**, and **roleplay capability**:

| Model Name | Description |
|------------|-------------|
| [jeffmeloy/Qwen2.5-7B-nerd-uncensored-v1.0](https://huggingface.co/jeffmeloy/Qwen2.5-7B-nerd-uncensored-v1.0) | A knowledge-rich, uncensored model with deep expertise across multiple domains. |
| [bunnycore/Blabbertron-1.0](https://huggingface.co/bunnycore/Blabbertron-1.0) | Optimized for free-flowing, expressive conversation. |
| [bunnycore/Qwen2.5-7B-Fuse-Exp](https://huggingface.co/bunnycore/Qwen2.5-7B-Fuse-Exp) | An experimental fusion of Qwen2.5-based models for nuanced understanding. |
| [Xiaojian9992024/Qwen2.5-Dyanka-7B-Preview](https://huggingface.co/Xiaojian9992024/Qwen2.5-Dyanka-7B-Preview) | Enhanced context comprehension and complex reasoning. |

---

## **Configuration**

```yaml
name: ZeroXClem-Qwen2.5-7B-HomerFuse-NerdExp
base_model: allknowingroger/HomerSlerp6-7B
dtype: bfloat16
merge_method: model_stock
models:
  - model: jeffmeloy/Qwen2.5-7B-nerd-uncensored-v1.0
  - model: bunnycore/Blabbertron-1.0
  - model: bunnycore/Qwen2.5-7B-Fuse-Exp
  - model: Xiaojian9992024/Qwen2.5-Dyanka-7B-Preview
tokenizer_source: allknowingroger/HomerSlerp6-7B
```
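
The YAML above can be replayed locally with mergekit's CLI. A sketch, assuming mergekit is installed and the config is saved as `homerfuse-nerdexp.yaml` (the filename and output path are illustrative):

```bash
# Install mergekit, then run the merge described by the config above.
pip install mergekit
mergekit-yaml homerfuse-nerdexp.yaml ./Qwen2.5-7B-HomerFuse-NerdExp \
    --cuda  # omit --cuda to merge on CPU
```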

---

## **Why This Model?**

- **Balanced Fusion**: a well-calibrated mix of reasoning, factual accuracy, and expressive depth.
- **Uncensored Knowledge**: suited to academic, technical, and exploratory conversations.
- **Enhanced Context Retention**: well suited to long-form discussions and in-depth analysis.
- **Diverse Applications**: handles creative writing, roleplay, and problem-solving tasks.

---

## How to Use

### Ollama (Quick Inference)

You can run the model with **Ollama** for quick testing:

```bash
ollama run hf.co/ZeroXClem/Qwen2.5-7B-HomerFuse-NerdExp-Q4_K_M-GGUF
```

### Hugging Face Transformers (Python)

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
import torch

model_name = "ZeroXClem/Qwen2.5-7B-HomerFuse-NerdExp"

# Load tokenizer & model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Initialize the text-generation pipeline (dtype and device placement
# are already set on the model above, so they are not repeated here)
text_generator = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
)

# Example prompt
prompt = "Describe the significance of AI ethics in modern technology."

# Generate output
outputs = text_generator(
    prompt,
    max_new_tokens=200,
    do_sample=True,
    temperature=0.7,
    top_k=50,
    top_p=0.95,
)

print(outputs[0]["generated_text"])
```
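
Qwen2.5-based models use the ChatML conversation format. In practice, `tokenizer.apply_chat_template` handles this for you; the sketch below only illustrates the layout that template produces (the tokenizer's own template remains authoritative):

```python
def chatml_prompt(system: str, user: str) -> str:
    """Assemble a ChatML-style prompt as used by Qwen2.5 chat models."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = chatml_prompt(
    "You are a helpful assistant.",
    "Describe the significance of AI ethics in modern technology.",
)
print(prompt)
```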

---

## **Performance & Benchmarks**

The model targets strong performance across reasoning, mathematics, and conversation. Formal evaluation results will be published here once benchmarking is complete.

---

## **Usage Recommendations**

For best results:

- Use the matching tokenizer: `allknowingroger/HomerSlerp6-7B`.
- Structure prompts for logical reasoning with a **step-by-step approach**.
- Run the model interactively for **long-form discussions**.

---
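
The step-by-step recommendation can be packaged as a small prompt helper (a hypothetical convenience function, not part of any library):

```python
def step_by_step(question: str) -> str:
    """Wrap a question in an explicit step-by-step instruction."""
    return (
        f"{question}\n\n"
        "Work through this step by step:\n"
        "1. Restate the problem in your own words.\n"
        "2. List the relevant facts and constraints.\n"
        "3. Reason through each step before moving on.\n"
        "4. State the final answer clearly.\n"
    )

print(step_by_step("A train travels 120 km in 1.5 hours. What is its average speed?"))
```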

## **Future Plans**

- Further optimization for **multi-turn dialogues** and **zero-shot reasoning**.
- Improved knowledge distillation for **factual consistency**.
- Deeper **character roleplay** with better expressiveness.

---

## **Feedback & Contributions**

This is an **open project**, and your feedback is invaluable!
**Leave a review** or **open a discussion** on [Hugging Face](https://huggingface.co/ZeroXClem).

---

### **Acknowledgments**

A huge thanks to all the contributors and model creators, and to **Hugging Face's mergekit community**, for pushing the boundaries of AI model merging!

---
|