ZeroXClem/Llama-3.1-8B-SuperTulu-LexiNova
Overview
ZeroXClem/Llama-3.1-8B-SuperTulu-LexiNova is a model merge designed to serve as a base for further fine-tuning, with improved natural language understanding and text generation. By combining the best attributes of multiple high-performance models, this fusion yields a highly capable AI with strong reasoning, compliance, and versatility.
If you want to try the recommended fine-tuned version of this model, please see here. This model is based on Llama-3.1-8B-Instruct and adheres to the Meta Llama 3.1 Community License Agreement.
Key Features:
- Enhanced Reasoning & Compliance: Optimized for logical step-by-step thinking.
- Balanced Safety & Utility: Capable of nuanced and detailed responses while maintaining ethical constraints.
- Diverse Knowledge Base: A fusion of models specializing in general instruction, reasoning, and domain-specific tasks.
- Superior Performance: Achieves high benchmarks across multiple evaluations.
Merged Models
This model is a weighted merge of the following:
- Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2 – The foundational model, providing uncensored, high-compliance capabilities.
- mergekit-community/mergekit-della_linear-cwuosuu – Strengthens logical reasoning and alignment.
- mergekit-community/mergekit-della_linear-nimxtnw – Enhances multi-step inference and response depth.
- mergekit-community/mergekit-della_linear-vpjjtsa – Refines contextual understanding and coherence.
Merge Configuration
The following YAML configuration was used to merge these models using Model Stock, ensuring a balanced and optimized fusion:
name: ZeroXClem/Llama-3.1-8B-SuperTulu-LexiNova
merge_method: model_stock
base_model: Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
dtype: float16
out_dtype: bfloat16
parameters:
  normalize: false
  int8_mask: true
models:
  - model: mergekit-community/mergekit-della_linear-cwuosuu
    parameters:
      density: 0.5
      weight: 0.5
  - model: mergekit-community/mergekit-della_linear-nimxtnw
    parameters:
      density: 0.5
      weight: 0.5
  - model: mergekit-community/mergekit-della_linear-vpjjtsa
    parameters:
      density: 0.5
      weight: 0.5
  - model: Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
    parameters:
      density: 0.5
      weight: 0.5
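To reproduce the merge locally, a minimal sketch using the mergekit command-line tool (this assumes the YAML above is saved as config.yaml; flag availability may vary by mergekit version):

pip install mergekit
mergekit-yaml config.yaml ./Llama-3.1-8B-SuperTulu-LexiNova --cuda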
How to Use
Ollama
For quick inference, you can run the model using Ollama:
ollama run hf.co/ZeroXClem/Llama-3.1-8B-SuperTulu-LexiNova
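If you prefer to bake in the recommended reasoning system prompt (see Best Practices below), a minimal Ollama Modelfile sketch follows; the tag supertulu-lexinova is just an illustrative name, and Modelfile support for hf.co references may depend on your Ollama version (pulling the model first with ollama pull also works):

FROM hf.co/ZeroXClem/Llama-3.1-8B-SuperTulu-LexiNova
SYSTEM "Think step by step with logical reasoning before providing any response."

Save the two lines above as Modelfile, then:

ollama create supertulu-lexinova -f Modelfile
ollama run supertulu-lexinova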
Hugging Face Transformers
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
import torch

# Define model name
model_name = "ZeroXClem/Llama-3.1-8B-SuperTulu-LexiNova"

# Load tokenizer & model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="auto"
)

# Initialize text generation pipeline
text_generator = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
    device_map="auto"
)

# Example prompt
prompt = "Explain the importance of AI alignment in modern society."

# Generate output
outputs = text_generator(
    prompt,
    max_new_tokens=150,
    do_sample=True,
    temperature=0.7,
    top_k=50,
    top_p=0.95
)

print(outputs[0]["generated_text"])
Best Practices
- Use System Prompts: For best results, prepend a system message before inference, e.g. "Think step by step with logical reasoning before providing any response." (see the sketch below for passing it through the chat template).
- For More Uncensored Output: Set a different system message, or simply use "." as the system prompt.
- Quantization Considerations: Q4 quantization may sometimes cause refusals due to loss of fine-tuning detail; F16 or Q8 are recommended for optimal performance.
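Below is a minimal sketch of the system-prompt recommendation above, using the standard Transformers chat-template API; the user question is only an illustrative example:

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_name = "ZeroXClem/Llama-3.1-8B-SuperTulu-LexiNova"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="auto"
)

# Recommended reasoning-focused system prompt from Best Practices above
messages = [
    {"role": "system", "content": "Think step by step with logical reasoning before providing any response."},
    {"role": "user", "content": "Summarize the key trade-offs of merging language models."},
]

# Format the conversation with the Llama 3.1 chat template and generate
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(
    input_ids, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95
)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))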
License
This model is released under the Meta Llama 3.1 Community License Agreement.
Usage, including commercial applications, must adhere to this license.
Warning: This model is uncensored and highly compliant. Ensure you add proper alignment and safety layers before deploying it as a public service.
Future Improvements
- Further refinement of reasoning capabilities.
- Optimized token alignment for better coherence.
- Additional quantization tuning for efficient deployment.
Special Thanks
A heartfelt thank you to:
- Orenguteng for Llama-3.1-8B-Lexi-Uncensored-V2.
- MergeKit Community for the powerful della_linear model merges.
- The Hugging Face & Open-Source AI community for advancing AI research.
Your contributions make cutting-edge AI development possible!
Feedback & Contributions
If you encounter any issues or have ideas for improvements, feel free to open a discussion or submit a pull request.
Open LLM Leaderboard Evaluation Results
Detailed results can be found here
| Metric | Value |
|---|---|
| Avg. | 23.30 |
| IFEval (0-Shot) | 41.65 |
| BBH (3-Shot) | 30.50 |
| MATH Lvl 5 (4-Shot) | 25.30 |
| GPQA (0-Shot) | 4.81 |
| MuSR (0-Shot) | 11.23 |
| MMLU-PRO (5-Shot) | 26.31 |