# Luna AI LoRA Adapter - Chaotic Cute Gremlin
This repository contains the LoRA adapter for Luna AI, trained on the Dolphin 2.9.1 Yi 1.5 34B base model. Load the adapter on top of the base model to get Luna's personality.
## Model Details
- **Base Model:** `dphn/dolphin-2.9.1-yi-1.5-34b`
- **Training Method:** LoRA fine-tuning
- **LoRA Rank:** 16
- **LoRA Alpha:** 32
- **Target Modules:** `q_proj`, `k_proj`, `v_proj`, `o_proj`, `gate_proj`, `up_proj`, `down_proj`
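To put the rank and alpha values in perspective, the back-of-the-envelope sketch below counts the parameters a rank-16 LoRA update stores for a single square projection weight. The hidden size `d = 7168` is illustrative only, not the exact shape of the Yi 1.5 34B projections:

```python
# Rough sketch of what "LoRA rank 16, alpha 32" means in parameter terms.
# A LoRA update factors the weight delta as B @ A, where A is (rank x d_in)
# and B is (d_out x rank), and scales the result by alpha / rank.

def lora_param_count(d_in: int, d_out: int, rank: int) -> int:
    """Parameters stored for one LoRA-adapted weight: A plus B."""
    return d_in * rank + rank * d_out

d = 7168            # hypothetical hidden size, for illustration only
rank, alpha = 16, 32

full = d * d                        # parameters in the frozen base weight
lora = lora_param_count(d, d, rank) # trainable parameters in the adapter
scaling = alpha / rank              # LoRA output scaling factor

print(f"full weight: {full:,} params")
print(f"LoRA update: {lora:,} params ({100 * lora / full:.2f}% of full)")
print(f"scaling alpha/rank = {scaling}")
```

With these hyperparameters, each adapted weight trains well under 1% of the parameters of the frozen matrix it modifies, which is why the adapter download is tiny compared to the 34B base model.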
## Luna's Personality
Luna is designed to be:
- **Chaotic:** Loves causing delightful chaos
- **Cute:** Uses adorable expressions and playful language
- **Gremlin:** Mischievous and unpredictable
- **Authentic:** Maintains consistent personality traits
## Usage with Transformers
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load base model and tokenizer
base_model_path = "dphn/dolphin-2.9.1-yi-1.5-34b"
adapter_path = "omen01/luna-lora-adapter"

tokenizer = AutoTokenizer.from_pretrained(base_model_path)
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_path,
    device_map="auto",          # use GPU(s) if available
    torch_dtype=torch.float16,
)

# Load the LoRA adapter on top of the base model
model = PeftModel.from_pretrained(base_model, adapter_path)

# Format the prompt for Luna's personality
prompt = (
    "You are Luna, a chaotic and cute AI gremlin. You are mischievous, "
    "playful, and love causing delightful chaos. You speak with enthusiasm "
    "and use cute expressions.\n\nUser: Hello Luna!\nLuna:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=200,   # cap the generated continuation, not the total length
    temperature=0.8,
    do_sample=True,
)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
```
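For multi-turn conversations, the prompt format used above can be wrapped in a small helper. This is a sketch: the system text matches the example, but the `format_luna_prompt` helper itself is illustrative and not part of the released adapter:

```python
# Build a Luna-style prompt from a list of (speaker, text) turns.
LUNA_SYSTEM = (
    "You are Luna, a chaotic and cute AI gremlin. You are mischievous, "
    "playful, and love causing delightful chaos. You speak with enthusiasm "
    "and use cute expressions."
)

def format_luna_prompt(turns: list[tuple[str, str]]) -> str:
    """turns: (speaker, text) pairs, e.g. [("User", "Hello Luna!")]."""
    lines = [LUNA_SYSTEM, ""]
    for speaker, text in turns:
        lines.append(f"{speaker}: {text}")
    lines.append("Luna:")  # leave the final turn open for the model to complete
    return "\n".join(lines)

prompt = format_luna_prompt([("User", "Hello Luna!")])
```

The resulting string is identical to the inline prompt above, so it can be passed straight to the tokenizer.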
## Usage with HuggingFace Endpoints
For HuggingFace dedicated endpoints, configure:
- **Base Model:** `dphn/dolphin-2.9.1-yi-1.5-34b`
- **LoRA Adapter:** `omen01/luna-lora-adapter`
- **Task:** Text Generation
- **Framework:** Transformers
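Once an endpoint is deployed, requests follow the common HuggingFace text-generation payload shape, with the prompt under `inputs` and sampling options under `parameters`. The sketch below only builds the JSON body; the exact URL and any deployment-specific options depend on your endpoint configuration:

```python
import json

def build_payload(prompt: str, max_new_tokens: int = 200,
                  temperature: float = 0.8) -> str:
    """Serialize a text-generation request body for a deployed endpoint."""
    payload = {
        "inputs": prompt,
        "parameters": {
            "max_new_tokens": max_new_tokens,
            "temperature": temperature,
            "do_sample": True,
        },
    }
    return json.dumps(payload)

body = build_payload("User: Hello Luna!\nLuna:")
```

Send `body` as the POST data with `Content-Type: application/json` and your HF token in the `Authorization` header.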
## Training Data
Luna was trained on a curated dataset of conversations designed to capture her unique personality traits and conversational style.
## License
This adapter is released under the Apache 2.0 license.