---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- ZeroXClem/Llama-3.1-8B-SuperNova-EtherealHermes
- invisietch/EtherealRainbow-v0.3-8B
language:
- en
base_model:
- ZeroXClem/Llama-3.1-8B-SuperNova-EtherealHermes
- invisietch/EtherealRainbow-v0.3-8B
pipeline_tag: text-generation
library_name: transformers
---
# ZeroXClem/Llama-3.1-8B-RainbowLight-EtherealMix
## Overview
**ZeroXClem/Llama-3.1-8B-RainbowLight-EtherealMix** is a powerful fusion of **ZeroXClem/Llama-3.1-8B-SuperNova-EtherealHermes** and **invisietch/EtherealRainbow-v0.3-8B**, merged with **SLERP (Spherical Linear Interpolation)** for a smooth, layer-wise blend of the two models' weights. The merge aims to strengthen reasoning, contextual understanding, and creative language generation while retaining the alignment and responsiveness of its parents.
---
## πŸ”₯ **Merged Models**
- **[ZeroXClem/Llama-3.1-8B-SuperNova-EtherealHermes](https://huggingface.co/ZeroXClem/Llama-3.1-8B-SuperNova-EtherealHermes)** - A highly optimized instruction-tuned model, built for nuanced, long-form reasoning.
- **[invisietch/EtherealRainbow-v0.3-8B](https://huggingface.co/invisietch/EtherealRainbow-v0.3-8B)** - A dynamic conversational model with expanded alignment and expressiveness.
---
## βš™οΈ **Merge Configuration**
The following YAML configuration defines how these models were fused using **SLERP**:
```yaml
# Merge configuration for ZeroXClem-Llama-3.1-8B-RainbowLight-EtherealMix using SLERP
name: ZeroXClem-Llama-3.1-8B-RainbowLight-EtherealMix
slices:
  - sources:
      - model: ZeroXClem/Llama-3.1-8B-SuperNova-EtherealHermes
        layer_range: [0, 32]
      - model: invisietch/EtherealRainbow-v0.3-8B
        layer_range: [0, 32]
merge_method: slerp
base_model: invisietch/EtherealRainbow-v0.3-8B
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
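If you want to reproduce this merge locally, the sketch below shows one way to hand the configuration above to mergekit from Python. Treat it as a hedged example: it assumes mergekit is installed (`pip install mergekit`) and that the YAML above is saved as `config.yaml`; the import paths and `MergeOptions` fields follow mergekit's documented usage and may differ between versions.
```python
# Minimal sketch of reproducing the merge with mergekit.
# Import paths and options follow mergekit's documented example and may vary by version.
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# config.yaml is assumed to hold the SLERP configuration shown above
with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./Llama-3.1-8B-RainbowLight-EtherealMix",
    options=MergeOptions(cuda=True, copy_tokenizer=True),
)
```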
### **Why SLERP?**
- **Maintains Model Integrity**: Ensures a smooth transition between feature spaces of both models.
- **Preserves Semantic Meaning**: Avoids interpolation collapse, keeping token embeddings rich in structure.
- **Balanced Performance**: Retains the best qualities from both parent models (see the illustrative sketch below).
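To make the interpolation concrete, here is a self-contained sketch of SLERP applied to a single pair of weight tensors. It is not mergekit's actual implementation; the function name, flattening strategy, and fallback threshold are assumptions chosen for clarity.
```python
import torch

def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors of the same shape."""
    v0_flat, v1_flat = v0.flatten().float(), v1.flatten().float()
    # Cosine of the angle between the two weight vectors
    cos_theta = torch.dot(v0_flat, v1_flat) / (v0_flat.norm() * v1_flat.norm() + eps)
    theta = torch.acos(cos_theta.clamp(-1.0, 1.0))
    if theta.abs() < 1e-4:
        # Nearly parallel weights: fall back to plain linear interpolation
        return (1 - t) * v0 + t * v1
    sin_theta = torch.sin(theta)
    w0 = torch.sin((1 - t) * theta) / sin_theta
    w1 = torch.sin(t * theta) / sin_theta
    return (w0 * v0_flat + w1 * v1_flat).reshape(v0.shape).to(v0.dtype)

# t = 0.5 gives an equal blend; the YAML above varies t per layer slice and module type
blended = slerp(0.5, torch.randn(4096, 4096), torch.randn(4096, 4096))
```
With `t = 0.5` the result is an equal blend; the configuration above varies `t` across layer slices and between `self_attn` and `mlp` modules.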
---
## πŸš€ **Capabilities**
### 🌟 **Enhanced Features**
- **Supercharged Instruction Following** – More intuitive and context-aware.
- **Advanced Conversational Flow** – Generates human-like responses with coherence.
- **Creative and Expressive Writing** – Ideal for storytelling, summarization, and content generation.
- **Expanded Knowledge Base** – Merging brings broader factual recall and conceptual understanding.
- **Flexible Alignment** – A balance of compliance and open-ended response generation.
---
## πŸ“₯ **Usage Instructions**
### **Transformers**
You can use the model via Hugging Face's `transformers` library:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model_name = "ZeroXClem/Llama-3.1-8B-RainbowLight-EtherealMix"
# Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.float16,
device_map="auto"
)
# Sample inference
prompt = "What are the implications of artificial intelligence in the future of education?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7, top_p=0.9)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
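Because both parent models are instruction-tuned Llama 3.1 derivatives, conversational prompts usually work best through the tokenizer's built-in chat template. The snippet below is a minimal sketch that continues from the code above (`tokenizer` and `model` already loaded); the system prompt is an illustrative placeholder, not something this model was specifically trained on.
```python
# Continues from the block above; assumes `tokenizer` and `model` are already loaded.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},  # placeholder system prompt
    {"role": "user", "content": "Explain spherical linear interpolation in two sentences."},
]

# Render the conversation with the Llama 3.1 chat template and generate a reply
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=200, do_sample=True, temperature=0.7, top_p=0.9)

# Decode only the newly generated tokens
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```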
### **Ollama**
For local execution with Ollama:
```sh
ollama run hf.co/ZeroXClem/Llama-3.1-8B-RainbowLight-EtherealMix
```
---
## πŸ“Œ **Important Notes**
- **License**: Governed by **Meta's Llama 3.1 Community License**.
- **Alignment Considerations**: Users are responsible for ethical and compliant use.
- **System Tokens**: Follows Llama 3.1 tokenization standards for inference stability.
- **Quantization**: **Use FP16 for optimal performance**, though **Q8** quantized versions may be available (see the 8-bit loading sketch below).
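If FP16 does not fit in memory, a quantized load is one option. The sketch below shows on-the-fly 8-bit loading with bitsandbytes (assuming `bitsandbytes` and `accelerate` are installed); it is an illustration, not an official quantized release, and quality may degrade slightly compared to FP16.
```python
# Illustrative 8-bit loading via bitsandbytes (requires bitsandbytes and accelerate)
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "ZeroXClem/Llama-3.1-8B-RainbowLight-EtherealMix"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
```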
---
## πŸ’œ **Special Thanks**
Deep gratitude to:
- **@invisietch** for EtherealRainbow-v0.3-8B.
- **Hugging Face & Open-Source AI Community** for their incredible contributions. πŸš€πŸ’–
---
## πŸ”— **Resources**
- **[Hugging Face Model Page](https://huggingface.co/ZeroXClem/Llama-3.1-8B-RainbowLight-EtherealMix)**
- **[Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)**
- **[MergeKit Repository](https://github.com/cg123/mergekit)**
---
**✨ Merged with precision. Optimized for excellence. Experience RainbowLight EtherealMix today! ✨**