---
language:
- en
license: llama3.1
library_name: transformers
tags:
- merge
- mergekit
- lazymergekit
- mergekit-community/mergekit-della_linear-cwuosuu
- mergekit-community/mergekit-della_linear-nimxtnw
- mergekit-community/mergekit-della_linear-vpjjtsa
- Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
base_model:
- mergekit-community/mergekit-della_linear-cwuosuu
- mergekit-community/mergekit-della_linear-nimxtnw
- mergekit-community/mergekit-della_linear-vpjjtsa
- Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
pipeline_tag: text-generation
model-index:
- name: Llama-3.1-8B-SuperTulu-LexiNova
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 41.65
      name: strict accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ZeroXClem/Llama-3.1-8B-SuperTulu-LexiNova
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 30.5
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ZeroXClem/Llama-3.1-8B-SuperTulu-LexiNova
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 25.3
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ZeroXClem/Llama-3.1-8B-SuperTulu-LexiNova
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 4.81
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ZeroXClem/Llama-3.1-8B-SuperTulu-LexiNova
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 11.23
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ZeroXClem/Llama-3.1-8B-SuperTulu-LexiNova
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 26.31
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ZeroXClem/Llama-3.1-8B-SuperTulu-LexiNova
      name: Open LLM Leaderboard
---
# ZeroXClem/Llama-3.1-8B-SuperTulu-LexiNova
## Overview
ZeroXClem/Llama-3.1-8B-SuperTulu-LexiNova is a model merge designed to serve as a base for further fine-tuning toward better natural language understanding and text generation. By combining the best attributes of multiple high-performance models, this fusion yields a highly capable AI with strong reasoning, compliance, and versatility.
If you want to try the recommended **fine-tuned** version of this model, see [here](https://huggingface.co/ZeroXClem/Llama-3.1-8B-SuperNova-EtherealHermes).
This model is based on **Llama-3.1-8B-Instruct** and adheres to the **Meta Llama 3.1 Community License Agreement**.
## 🚀 Key Features
- **Enhanced Reasoning & Compliance**: Optimized for logical, step-by-step thinking.
- **Balanced Safety & Utility**: Capable of nuanced and detailed responses while maintaining ethical constraints.
- **Diverse Knowledge Base**: A fusion of models specializing in general instruction, reasoning, and domain-specific tasks.
- **Benchmark Coverage**: Evaluated across multiple Open LLM Leaderboard tasks (see the results section below).
## 🧠 Merged Models
This model is a weighted merge of the following:
- **[Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2](https://huggingface.co/Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2)** β The foundational model, providing uncensored, high-compliance capabilities.
- **[mergekit-community/mergekit-della_linear-cwuosuu](https://huggingface.co/mergekit-community/mergekit-della_linear-cwuosuu)** β Strengthens logical reasoning and alignment.
- **[mergekit-community/mergekit-della_linear-nimxtnw](https://huggingface.co/mergekit-community/mergekit-della_linear-nimxtnw)** β Enhances multi-step inference and response depth.
- **[mergekit-community/mergekit-della_linear-vpjjtsa](https://huggingface.co/mergekit-community/mergekit-della_linear-vpjjtsa)** β Refines contextual understanding and coherence.
## 🔧 Merge Configuration
The following **YAML** configuration was used to merge these models with the **Model Stock** method, ensuring a balanced and optimized fusion:
```yaml
name: ZeroXClem/Llama-3.1-8B-SuperTulu-LexiNova
merge_method: model_stock
base_model: Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
dtype: float16
out_dtype: bfloat16
parameters:
  normalize: false
  int8_mask: true
models:
  - model: mergekit-community/mergekit-della_linear-cwuosuu
    parameters:
      density: 0.5
      weight: 0.5
  - model: mergekit-community/mergekit-della_linear-nimxtnw
    parameters:
      density: 0.5
      weight: 0.5
  - model: mergekit-community/mergekit-della_linear-vpjjtsa
    parameters:
      density: 0.5
      weight: 0.5
  - model: Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
    parameters:
      density: 0.5
      weight: 0.5
```
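To reproduce the merge locally, here is a minimal sketch using the mergekit CLI (assuming the YAML above is saved as `config.yaml`; the output directory name is illustrative):
```sh
pip install mergekit
# Run the merge described by config.yaml; --cuda performs the tensor
# arithmetic on GPU and can be dropped for a CPU-only merge.
mergekit-yaml config.yaml ./Llama-3.1-8B-SuperTulu-LexiNova --cuda
```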
---
## 🚀 How to Use
### 🔥 Ollama
For quick inference, you can run the model using **Ollama**:
```sh
ollama run hf.co/ZeroXClem/Llama-3.1-8B-SuperTulu-LexiNova
```
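You can also pass a prompt directly for a one-off generation (the prompt text is just an example):
```sh
ollama run hf.co/ZeroXClem/Llama-3.1-8B-SuperTulu-LexiNova "Explain the importance of AI alignment in modern society."
```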
### 🤗 Hugging Face Transformers
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
import torch

# Define model name
model_name = "ZeroXClem/Llama-3.1-8B-SuperTulu-LexiNova"

# Load tokenizer & model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Initialize the text-generation pipeline; dtype and device placement
# are inherited from the model loaded above.
text_generator = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
)

# Example prompt
prompt = "Explain the importance of AI alignment in modern society."

# Generate output
outputs = text_generator(
    prompt,
    max_new_tokens=150,
    do_sample=True,
    temperature=0.7,
    top_k=50,
    top_p=0.95,
)

print(outputs[0]["generated_text"])
```
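Since the model is tuned for conversation, you may get better results by formatting requests with the built-in Llama 3.1 chat template instead of raw text. A minimal sketch (the message contents are illustrative; the system prompt follows the Best Practices section below):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_name = "ZeroXClem/Llama-3.1-8B-SuperTulu-LexiNova"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.bfloat16, device_map="auto"
)

# Chat-formatted request; the tokenizer applies the Llama 3.1 template.
messages = [
    {"role": "system", "content": "Think step by step with logical reasoning before providing any response."},
    {"role": "user", "content": "Explain the importance of AI alignment in modern society."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```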
---
## 📌 Best Practices
- **Use System Prompts:**
For best results, use a system message before inference:
`"Think step by step with logical reasoning before providing any response."`
- **For More Uncensored Output:**
You can set a different system message or simply use `"."` as the system prompt.
- **Quantization Considerations:**
- `Q4` may sometimes cause refusals because heavy quantization erodes the fine-tuned behavior.
- `F16` or `Q8` is recommended for optimal performance (see the conversion sketch below).
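If you quantize locally, a rough sketch with llama.cpp (the script and binary names follow the current llama.cpp repository and may change between versions; output file names are illustrative):
```sh
# Convert the HF checkpoint to GGUF at 16-bit precision, then quantize
# to Q8_0, which preserves the fine-tuned behavior better than Q4.
python convert_hf_to_gguf.py ./Llama-3.1-8B-SuperTulu-LexiNova \
    --outfile supertulu-lexinova-f16.gguf --outtype f16
llama-quantize supertulu-lexinova-f16.gguf supertulu-lexinova-Q8_0.gguf Q8_0
```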
---
## 📜 License
This model is released under the **Meta Llama 3.1 Community License Agreement**.
Usage, including commercial applications, must adhere to this license.
⚠️ **Warning:** This model is uncensored and highly compliant. Ensure proper alignment layers before deploying it as a public service.
---
## 💡 Future Improvements
- Further refinement of reasoning capabilities.
- Optimized token alignment for better coherence.
- Additional quantization tuning for efficient deployment.
---
## ❤️ Special Thanks
A heartfelt thank you to:
- **Orenguteng** for **Llama-3.1-8B-Lexi-Uncensored-V2**.
- **MergeKit Community** for the powerful **della_linear** model merges.
- The **🤗 Hugging Face & Open-Source AI** community for advancing AI research.

Your contributions make cutting-edge AI development possible! 🚀
---
## 📢 Feedback & Contributions
If you encounter any issues or have ideas for improvements, feel free to open a discussion or submit a pull request.
---
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/ZeroXClem__Llama-3.1-8B-SuperTulu-LexiNova-details).
| Metric |Value|
|-------------------|----:|
|Avg. |23.30|
|IFEval (0-Shot) |41.65|
|BBH (3-Shot) |30.50|
|MATH Lvl 5 (4-Shot)|25.30|
|GPQA (0-shot) | 4.81|
|MuSR (0-shot) |11.23|
|MMLU-PRO (5-shot) |26.31|