---
language:
- en
license: llama3.1
library_name: transformers
tags:
- merge
- mergekit
- lazymergekit
base_model:
- invisietch/L3.1-EtherealRainbow-v1.0-rc1-8B
- ZeroXClem/L3SAO-Mix-SuperHermes-NovaPurosani-8B
- ZeroXClem/Llama3.1-Hermes3-SuperNova-8B-L3.1-Purosani-2-8B
- djuna/L3.1-Purosani-2-8B
- ZeroXClem/Llama-3.1-8B-SuperTulu-LexiNova
pipeline_tag: text-generation
model-index:
- name: Llama-3.1-8B-SuperNova-EtherealHermes
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 73.39
      name: strict accuracy
    source:
      url: >-
        https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ZeroXClem/Llama-3.1-8B-SuperNova-EtherealHermes
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 32.07
      name: normalized accuracy
    source:
      url: >-
        https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ZeroXClem/Llama-3.1-8B-SuperNova-EtherealHermes
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 17.45
      name: exact match
    source:
      url: >-
        https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ZeroXClem/Llama-3.1-8B-SuperNova-EtherealHermes
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 5.7
      name: acc_norm
    source:
      url: >-
        https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ZeroXClem/Llama-3.1-8B-SuperNova-EtherealHermes
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 11.32
      name: acc_norm
    source:
      url: >-
        https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ZeroXClem/Llama-3.1-8B-SuperNova-EtherealHermes
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 30.5
      name: accuracy
    source:
      url: >-
        https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ZeroXClem/Llama-3.1-8B-SuperNova-EtherealHermes
      name: Open LLM Leaderboard
---

# ZeroXClem/Llama-3.1-8B-SuperNova-EtherealHermes

## 🌌 Overview

**ZeroXClem/Llama-3.1-8B-SuperNova-EtherealHermes** is a cutting-edge fusion of top-tier Llama 3.1 models, meticulously crafted to balance **powerful instruction-following**, **immersive storytelling**, and **logical reasoning**. This merge integrates the strengths of **SuperNova**, **EtherealHermes**, and additional high-performance models, resulting in an adaptable and dynamic AI.

This model is governed by the **Meta Llama 3.1 Community License Agreement** and is optimized for **long-form generation**, **multi-step reasoning**, and **roleplay applications**.

---

## 🚀 Key Features

- **Advanced Instruction Following** – Leverages high-context retention for accurate and logical responses.
- **Enhanced Roleplay & Storytelling** – Supports immersive dialogue, lore-building, and dynamic narrative generation.
- **Long-Form Content Generation** – Capable of producing detailed, coherent text over extended passages.
- **Adaptive Multi-Domain Performance** – Handles research, fiction writing, technical content, and conversation seamlessly.
- **Highly Efficient Processing** – Optimized quantization and inference mechanisms ensure smooth deployment.

---

## 🧠 Merged Models

This model is the result of a carefully calibrated merge of the following models:

- **[djuna/L3.1-Purosani-2-8B](https://huggingface.co/djuna/L3.1-Purosani-2-8B)** – A high-performance Llama 3.1 model emphasizing **instruction-following** and **contextual coherence**.
- **[invisietch/L3.1-EtherealRainbow-v1.0-rc1-8B](https://huggingface.co/invisietch/L3.1-EtherealRainbow-v1.0-rc1-8B)** – Focuses on **creative storytelling**, **world-building**, and **conversational depth**.
- **[ZeroXClem/L3SAO-Mix-SuperHermes-NovaPurosani-8B](https://huggingface.co/ZeroXClem/L3SAO-Mix-SuperHermes-NovaPurosani-8B)** – A hybrid powerhouse that integrates **Hermes3**, **SuperNova**, and **Purosani** architectures.
- **[ZeroXClem/Llama3.1-Hermes3-SuperNova-8B-L3.1-Purosani-2-8B](https://huggingface.co/ZeroXClem/Llama3.1-Hermes3-SuperNova-8B-L3.1-Purosani-2-8B)** – Enhances **multi-step inference**, **logical alignment**, and **long-form composition**.
- **[ZeroXClem/Llama-3.1-8B-SuperTulu-LexiNova](https://huggingface.co/ZeroXClem/Llama-3.1-8B-SuperTulu-LexiNova)** – The fifth component of the merge, included per the configuration below.

This curated selection ensures the model is equipped with both **technical precision** and **artistic creativity**.

---

## 🔧 Merge Configuration

The model was merged using the **Model Stock** methodology with **bfloat16** precision to ensure a seamless blend of capabilities. The YAML configuration is as follows:

```yaml
# Merge configuration for ZeroXClem-Llama-3.1-8B-SuperNova-EtherealHermes using MODEL STOCK
name: ZeroXClem-Llama-3.1-8B-SuperNova-EtherealHermes
base_model: invisietch/L3.1-EtherealRainbow-v1.0-rc1-8B
dtype: bfloat16
merge_method: model_stock
models:
  - model: ZeroXClem/L3SAO-Mix-SuperHermes-NovaPurosani-8B
  - model: ZeroXClem/Llama3.1-Hermes3-SuperNova-8B-L3.1-Purosani-2-8B
  - model: djuna/L3.1-Purosani-2-8B
  - model: ZeroXClem/Llama-3.1-8B-SuperTulu-LexiNova
tokenizer_source: invisietch/L3.1-EtherealRainbow-v1.0-rc1-8B
```

This configuration ensures **logical coherence**, **creative diversity**, and **robust performance** across various AI tasks.
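To reproduce the merge yourself, the configuration above can be saved to a file and run through [MergeKit](https://github.com/arcee-ai/mergekit). A minimal sketch, assuming MergeKit is installed and the config is saved as `config.yaml` (a placeholder filename):

```bash
pip install mergekit

# Run the Model Stock merge on GPU; drop --cuda to merge on CPU
mergekit-yaml config.yaml ./Llama-3.1-8B-SuperNova-EtherealHermes --cuda
```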
---

## 🛠 How to Use

### 🔥 Ollama (Quick Inference)

You can run the model using **Ollama** for direct testing:

```bash
ollama run hf.co/ZeroXClem/Llama-3.1-8B-SuperNova-EtherealHermes-Q4_K_M-GGUF
```

### 🤗 Hugging Face Transformers (Python)

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
import torch

model_name = "ZeroXClem/Llama-3.1-8B-SuperNova-EtherealHermes"

# Load tokenizer & model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="auto"
)

# Initialize text generation pipeline (dtype and device placement are already
# set on the model above, so they are not repeated here)
text_generator = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer
)

# Example prompt
prompt = "Describe the significance of AI ethics in modern technology."
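
# Optional: chat-style prompting. Llama 3.1 instruct models generally expect a
# chat template rather than a raw string; this sketch assumes the merged
# tokenizer inherits the Llama 3.1 chat template, and uses the system prompt
# recommended under Best Practices below.
messages = [
    {
        "role": "system",
        "content": "Think step by step with logical reasoning before providing any response.",
    },
    {"role": "user", "content": prompt},
]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
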
# Generate output
outputs = text_generator(
    prompt,
    max_new_tokens=200,
    do_sample=True,
    temperature=0.7,
    top_k=50,
    top_p=0.95
)

print(outputs[0]["generated_text"])
```

---

## 📌 Best Practices

- **Use System Prompts:** For best performance, add a system instruction before inference:
  `"Think step by step with logical reasoning before providing any response."`
- **Uncensored Mode:** For more unrestricted output, set the system message to `"."` or customize it accordingly.
- **Quantization Considerations:**
  - `Q4` may lead to refusal issues due to loss of fine-tuning alignment.
  - `F16` or `Q8` is recommended for **optimal inference quality**.

---

## 📜 License

This model is released under the **Meta Llama 3.1 Community License Agreement**.

⚠ **Disclaimer:** This model is highly compliant and **uncensored**. It is the user's responsibility to ensure ethical and appropriate usage, especially in public-facing applications.

---

## 💡 Future Improvements

- **Enhanced ethical alignment while preserving model capabilities.**
- **Further fine-tuning for domain-specific reasoning tasks.**
- **Expanded dataset integration for better real-world knowledge representation.**

---

## ❤️ Special Thanks

A heartfelt thank you to:

- **djuna** for **L3.1-Purosani-2-8B**.
- **invisietch** for **L3.1-EtherealRainbow**.
- **MergeKit Community** for advancing open-source merging techniques.
- The **🤗 Hugging Face & Open-Source AI** ecosystem for continued AI innovation.

Your contributions fuel the progress of **next-gen AI models**! 🚀💜

---

## 📢 Feedback & Contributions

If you encounter any issues, have suggestions, or wish to contribute, feel free to open a **discussion** or submit a **pull request**.

---

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/ZeroXClem__Llama-3.1-8B-SuperNova-EtherealHermes-details).

| Metric              | Value |
|---------------------|------:|
| Avg.                | 28.41 |
| IFEval (0-Shot)     | 73.39 |
| BBH (3-Shot)        | 32.07 |
| MATH Lvl 5 (4-Shot) | 17.45 |
| GPQA (0-shot)       |  5.70 |
| MuSR (0-shot)       | 11.32 |
| MMLU-PRO (5-shot)   | 30.50 |
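If you want to verify these scores locally, one option is EleutherAI's `lm-evaluation-harness`, which bundles the Open LLM Leaderboard v2 benchmarks as a `leaderboard` task group. A hedged sketch (task-group name and required extras may vary across harness versions, and local numbers can differ slightly from the official leaderboard setup):

```bash
pip install lm_eval

# Evaluate on the leaderboard task group in bfloat16
lm_eval --model hf \
  --model_args pretrained=ZeroXClem/Llama-3.1-8B-SuperNova-EtherealHermes,dtype=bfloat16 \
  --tasks leaderboard \
  --batch_size auto \
  --output_path ./leaderboard-results
```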