# Mythos Prime: A LoRA-Fused GGUF Model
**Author:** Ronnie Alfaro (@ronniealfaro)
**Base model:** NousResearch/Hermes-2-Pro-Mistral-7B
**Fused with:** PEFT `merge_and_unload()`, converted to GGUF with llama.cpp
## Model Description
Mythos Prime fuses the Hermes 2 Pro (Mistral 7B) base model with a LoRA adapter trained on mythological, symbolic, and philosophical prompts drawn from global traditions.

The model blends Hermes' natural coherence with a deep narrative capacity focused on:
- Storytelling with archetypal structure
- Myth interpretation and symbolic exploration
- Philosophical discourse generation
- Lore-rich worldbuilding for fiction & games
- Creative writing inspired by ancient motifs
## Technical Specs

| Feature | Value |
|---|---|
| Base model | NousResearch/Hermes-2-Pro-Mistral-7B |
| Adapter type | LoRA (`peft_type: LORA`) |
| Merge method | `merge_and_unload()` via PEFT |
| Output formats | `.gguf` (f16) + `.gguf` (Q4_K_M) |
| Tokenizer | Original Hermes tokenizer |
| Context length | 4096 tokens |
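The merge step described above can be sketched with the PEFT API. This is an illustrative sketch, not the exact script used here: the adapter path and output directory are placeholder names, and actually running it requires `transformers`, `peft`, and enough memory for a 7B model.

```python
def merge_lora(
    base_id: str = "NousResearch/Hermes-2-Pro-Mistral-7B",
    adapter_path: str = "path/to/lora-adapter",  # placeholder: your trained LoRA dir
    out_dir: str = "mythos-merged",              # placeholder output directory
) -> None:
    """Fuse a LoRA adapter into its base model and save a plain HF checkpoint."""
    # Imports are kept local so the sketch reads without the dependencies installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto")
    model = PeftModel.from_pretrained(base, adapter_path)
    merged = model.merge_and_unload()  # bakes the LoRA deltas into the base weights
    merged.save_pretrained(out_dir)
    # Keep the original Hermes tokenizer alongside the merged weights
    AutoTokenizer.from_pretrained(base_id).save_pretrained(out_dir)

# Example call (downloads the 7B base model; needs the adapter locally):
# merge_lora()
```

The merged checkpoint is a standard Hugging Face model directory, which is what llama.cpp's converter expects as input.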
## Files Included

| Filename | Description |
|---|---|
| `mythos.gguf` | Fused full-precision model (f16) |
| `mythos.Q4_K_M.gguf` | Quantized version (Q4_K_M) |
| `tokenizer.model` / `tokenizer.json` | Tokenizer for Hermes |
| `config.json` | Base model config |
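The two GGUF files can be reproduced with llama.cpp's converter and quantizer. The command names below match recent llama.cpp checkouts (older versions used `convert.py` and `quantize`), and `mythos-merged` is a placeholder for the merged model directory:

```shell
# Convert the merged HF checkpoint to full-precision GGUF (f16)
python convert_hf_to_gguf.py mythos-merged --outfile mythos.gguf --outtype f16

# Quantize to Q4_K_M for a much smaller, CPU-friendly file
./llama-quantize mythos.gguf mythos.Q4_K_M.gguf Q4_K_M
```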
## Intended Use
Mythos Prime was created for use in:
- Writing and storytelling based on mythic structures
- Narrative AI assistants with symbolic insight
- Philosophical or metaphysical exploration
- Games and RPG tools for generating lore, pantheons, or quests
- Worldbuilding that draws from multiple cultural traditions
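For the uses above, a minimal way to query the quantized file locally is through the `llama-cpp-python` bindings; this is one option among several (any GGUF-capable runtime works), and the model path and prompt are illustrative.

```python
def ask_mythos(prompt: str, model_path: str = "mythos.Q4_K_M.gguf") -> str:
    """Run a single chat completion against the local GGUF file."""
    # Local import so the sketch reads without llama-cpp-python installed.
    from llama_cpp import Llama

    llm = Llama(model_path=model_path, n_ctx=4096)  # matches the model's context length
    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": prompt}],
        max_tokens=256,
    )
    return out["choices"][0]["message"]["content"]

# Example call (requires the GGUF file locally):
# print(ask_mythos("Interpret the symbolism of the ouroboros."))
```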
## Limitations
- May produce abstract, symbolic, or obscure outputs; this is by design
- Not tuned for factual accuracy or real-time knowledge
- May reflect cultural biases from training data
## Credits & Lore
Fused by Ronnie Alfaro, after wrestling with Axolotl, its `examples.yaml` minion, the import-loop cyclops, and the blue screen oracle of pain.
This model is a techno-mystical artifact of patience, recursion, and narrative passion.
## License

Follows the base model license: Apache 2.0