🧿 Mythos Prime: A LoRA-Fused GGUF Model

Author: Ronnie Alfaro (@ronniealfaro)
Base model: NousResearch/Hermes-2-Pro-Mistral-7B
Fused with PEFT's merge_and_unload() and converted to GGUF with llama.cpp.


📜 Model Description

Mythos Prime is a fine-tuned fusion of the powerful Hermes 2 Pro (Mistral 7B) base model with a LoRA adapter trained on mythological, symbolic, and philosophical prompts across global traditions.

This model blends Hermes' natural coherence with a deep narrative capacity focused on:

  • 🧠 Storytelling with archetypal structure
  • 📚 Myth interpretation and symbolic exploration
  • 💬 Philosophical discourse generation
  • 🎮 Lore-rich worldbuilding for fiction & games
  • ✍️ Creative writing inspired by ancient motifs

๐Ÿ› ๏ธ Technical Specs

Feature Value
Base model NousResearch/Hermes-2-Pro-Mistral-7B
Adapter type LoRA (peft_type: LORA)
Merge method merge_and_unload() via PEFT
Output formats .gguf (f16) + .gguf (Q4_K_M)
Tokenizer Hermes original tokenizer
Context length 4096 tokens
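
For reference, the fusion follows the standard PEFT pattern. Below is a minimal sketch, assuming the LoRA adapter lives in a local directory (the adapter path is hypothetical; the base model ID is the one above):

```python
# Minimal sketch of the fusion step. The adapter path is hypothetical;
# the base model ID matches this card.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "NousResearch/Hermes-2-Pro-Mistral-7B"
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Attach the LoRA adapter, then bake its weights into the base model.
model = PeftModel.from_pretrained(base, "path/to/mythos-lora")  # hypothetical path
model = model.merge_and_unload()

# Save the fused checkpoint; llama.cpp then converts it to GGUF, e.g.:
#   python convert_hf_to_gguf.py mythos-fused --outfile mythos.gguf --outtype f16
#   llama-quantize mythos.gguf mythos.Q4_K_M.gguf Q4_K_M
model.save_pretrained("mythos-fused")
tokenizer.save_pretrained("mythos-fused")
```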

๐Ÿ” Files Included

Filename Description
mythos.gguf Fused full-precision model (f16)
mythos.Q4_K_M.gguf Quantized version (Q4_K_M)
tokenizer.model/json Tokenizer for Hermes
config.json Base model config
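
To run the quantized file locally, any GGUF-compatible runtime works. Here is a minimal sketch using llama-cpp-python; the settings are illustrative, not tuned defaults:

```python
# Minimal local-inference sketch using llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="mythos.Q4_K_M.gguf",  # quantized file from this repo
    n_ctx=4096,                       # matches the context length above
)

out = llm(
    "Retell the myth of Prometheus as a parable about open knowledge.",
    max_tokens=256,
)
print(out["choices"][0]["text"])
```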

🧪 Intended Use

Mythos Prime was created for use in:

  • Writing and storytelling based on mythic structures
  • Narrative AI assistants with symbolic insight
  • Philosophical or metaphysical exploration
  • Games and RPG tools for generating lore, pantheons, or quests
  • Worldbuilding that draws from multiple cultural traditions
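
Hermes 2 Pro models are prompted in ChatML format; assuming the fused model inherits that template, a lore-generation request might look like the following (reusing the llm object from the loading sketch above):

```python
# ChatML framing (system + user turns), assuming the fuse preserves the
# Hermes prompt format. The persona text is an illustrative example.
prompt = (
    "<|im_start|>system\n"
    "You are Mythos Prime, a mythographer who weaves pantheons, quests, "
    "and symbols from global traditions.<|im_end|>\n"
    "<|im_start|>user\n"
    "Design a small pantheon of three tide-gods for a coastal RPG setting, "
    "with one shared creation myth.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

out = llm(prompt, max_tokens=512, stop=["<|im_end|>"])
print(out["choices"][0]["text"])
```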

โš ๏ธ Limitations

  • May produce abstract, symbolic, or obscure outputs; this is by design
  • Not tuned for factual accuracy or real-time knowledge
  • May reflect cultural biases from training data

🗿 Credits & Lore

Fused by Ronnie Alfaro, after wrestling with Axolotl, its examples.yaml minion, the import-loop cyclops, and the blue screen oracle of pain.

This model is a techno-mystical artifact of patience, recursion, and narrative passion.


🔮 License

Released under the base model's license: Apache 2.0.

