---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- ZeroXClem/Llama-3.1-8B-SuperNova-EtherealHermes
- invisietch/EtherealRainbow-v0.3-8B
language:
- en
base_model:
- ZeroXClem/Llama-3.1-8B-SuperNova-EtherealHermes
- invisietch/EtherealRainbow-v0.3-8B
pipeline_tag: text-generation
library_name: transformers
---

# ZeroXClem/Llama-3.1-8B-RainbowLight-EtherealMix

## Overview
**ZeroXClem/Llama-3.1-8B-RainbowLight-EtherealMix** is a fusion of **ZeroXClem/Llama-3.1-8B-SuperNova-EtherealHermes** and **invisietch/EtherealRainbow-v0.3-8B**, blended with **SLERP (Spherical Linear Interpolation)** for a smooth interpolation of the two models' weights. The merge targets stronger reasoning, contextual understanding, and creative language generation while retaining alignment and responsiveness.

---

## πŸ”₯ **Merged Models**
- **[ZeroXClem/Llama-3.1-8B-SuperNova-EtherealHermes](https://huggingface.co/ZeroXClem/Llama-3.1-8B-SuperNova-EtherealHermes)** - A highly optimized instruction-tuned model, built for nuanced, long-form reasoning.
- **[invisietch/EtherealRainbow-v0.3-8B](https://huggingface.co/invisietch/EtherealRainbow-v0.3-8B)** - A dynamic conversational model with expanded alignment and expressiveness.

---

## βš™οΈ **Merge Configuration**
The following YAML configuration defines how these models were fused using **SLERP**:

```yaml
# Merge configuration for ZeroXClem-Llama-3.1-8B-RainbowLight-EtherealMix using SLERP

name: ZeroXClem-Llama-3.1-8B-RainbowLight-EtherealMix
slices:
  - sources:
      - model: ZeroXClem/Llama-3.1-8B-SuperNova-EtherealHermes
        layer_range: [0, 32]  # merge all 32 transformer layers
      - model: invisietch/EtherealRainbow-v0.3-8B
        layer_range: [0, 32]
merge_method: slerp
base_model: invisietch/EtherealRainbow-v0.3-8B
parameters:
  t:
    # Interpolation factor t varies with layer depth; attention and MLP
    # blocks use mirrored schedules so neither parent dominates throughout.
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5  # default t for all remaining tensors
dtype: bfloat16
```
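
To reproduce the merge locally, this file can be passed to MergeKit's `mergekit-yaml` entry point. A sketch, assuming the config above is saved as `config.yaml`; the output directory name is illustrative:

```sh
# Install MergeKit first (see the repository linked under Resources)
mergekit-yaml config.yaml ./Llama-3.1-8B-RainbowLight-EtherealMix --cuda
```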

### **Why SLERP?**
- **Maintains Model Integrity**: Interpolates along the arc between the two weight vectors, ensuring a smooth transition between the feature spaces of both models.
- **Preserves Semantic Meaning**: Avoids the norm shrinkage that plain linear averaging can cause, keeping token embeddings rich in structure (see the sketch below).
- **Balanced Performance**: Retains the best qualities of both parent models.
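
Conceptually, SLERP interpolates along the arc between two parameter vectors rather than the chord connecting them. The following is a minimal sketch of that math (not MergeKit's internal implementation) applied to a pair of weight tensors:

```python
import torch

def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors (sketch)."""
    v0_flat = v0.flatten().float()
    v1_flat = v1.flatten().float()

    # Angle omega between the two parameter vectors
    v0_unit = v0_flat / v0_flat.norm()
    v1_unit = v1_flat / v1_flat.norm()
    omega = torch.acos(torch.clamp(torch.dot(v0_unit, v1_unit), -1.0, 1.0))

    # Nearly parallel weights: plain linear interpolation is numerically safer
    if omega < 1e-4:
        return (1 - t) * v0 + t * v1

    # slerp(t) = [sin((1-t)*omega) * v0 + sin(t*omega) * v1] / sin(omega)
    sin_omega = torch.sin(omega)
    mixed = (torch.sin((1 - t) * omega) * v0_flat
             + torch.sin(t * omega) * v1_flat) / sin_omega
    return mixed.reshape(v0.shape).to(v0.dtype)

# Example: blend two random "layers" at the config's default t = 0.5
a, b = torch.randn(64, 64), torch.randn(64, 64)
merged = slerp(0.5, a, b)
```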

---

## πŸš€ **Capabilities**
### 🌟 **Enhanced Features**
- **Supercharged Instruction Following** – More intuitive and context-aware.
- **Advanced Conversational Flow** – Generates coherent, human-like responses.
- **Creative and Expressive Writing** – Ideal for storytelling, summarization, and content generation.
- **Expanded Knowledge Base** – Merging brings broader factual recall and conceptual understanding.
- **Flexible Alignment** – A balance of compliance and open-ended response generation.

---

## πŸ“₯ **Usage Instructions**
### **Transformers**
You can use the model via Hugging Face's `transformers` library:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_name = "ZeroXClem/Llama-3.1-8B-RainbowLight-EtherealMix"

# Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,
    device_map="auto"
)

# Sample inference
prompt = "What are the implications of artificial intelligence in the future of education?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

output = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7, top_p=0.9)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
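
Llama 3.1 repositories ship a chat template, so multi-turn prompts can be formatted with `apply_chat_template`. A sketch reusing the `tokenizer` and `model` loaded above, assuming the merged repo retains the standard Llama 3.1 template:

```python
messages = [
    {"role": "system", "content": "You are a concise, helpful assistant."},
    {"role": "user", "content": "Explain SLERP model merging in two sentences."},
]

# Render the conversation with the model's built-in chat template
chat_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

chat_out = model.generate(chat_ids, max_new_tokens=120, do_sample=True, temperature=0.7)

# Decode only the newly generated tokens
print(tokenizer.decode(chat_out[0][chat_ids.shape[-1]:], skip_special_tokens=True))
```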

### **Ollama**
For local execution with Ollama:
```sh
ollama run hf.co/ZeroXClem/Llama-3.1-8B-RainbowLight-EtherealMix
```


---

## πŸ“Œ **Important Notes**
- **License**: Governed by **Meta's Llama 3.1 Community License**.
- **Alignment Considerations**: Users are responsible for ethical and compliant use.
- **System Tokens**: Follows Llama 3.1 tokenization standards for inference stability.
- **Quantization**: FP16/BF16 is recommended for best quality; for tighter memory budgets an 8-bit load is sketched below, and community **Q8** quantized builds may also be available.
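
For reduced memory use, the model can be loaded in 8-bit via `transformers` (a sketch; assumes `bitsandbytes` is installed, and quality may dip slightly versus FP16):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "ZeroXClem/Llama-3.1-8B-RainbowLight-EtherealMix"

# 8-bit weights roughly halve memory use versus FP16
quant_config = BitsAndBytesConfig(load_in_8bit=True)

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=quant_config,
    device_map="auto",
)
```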

---

## πŸ’œ **Special Thanks**
Deep gratitude to:
- **@invisietch** for EtherealRainbow-v0.3-8B.
- **Hugging Face & Open-Source AI Community** for their incredible contributions. πŸš€πŸ’–

---

## πŸ”— **Resources**
- **[Hugging Face Model Page](https://huggingface.co/ZeroXClem/Llama-3.1-8B-RainbowLight-EtherealMix)**
- **[Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)**
- **[MergeKit Repository](https://github.com/cg123/mergekit)**

---

**✨ Merged with precision. Optimized for excellence. Experience RainbowLight EtherealMix today! ✨**