# GPT-Neo 2.7B Fine-tuned LoRA Adapter for Lyrics Generation
This is a LoRA adapter for EleutherAI/gpt-neo-2.7B that was fine-tuned to generate creative song lyrics based on themes and musical styles.
## Model Description
The model has been fine-tuned on a diverse collection of song lyrics to capture various styles, rhyme patterns, and emotional tones. It can generate lyrics when given specific themes, genres, or emotional contexts as prompts.
## Model Architecture

- Base model: GPT-Neo 2.7B
- Architecture: Transformer-based autoregressive language model
- Fine-tuning: LoRA (Low-Rank Adaptation) via the PEFT library
- Parameters: 2.7 billion in the base model; the LoRA adapter trains only a small fraction of these (see the sketch after this list)
- Context window: 2048 tokens
- Training approach: Parameter-efficient fine-tuning on a lyrics dataset
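The card does not state the exact adapter size, but PEFT can report it directly. Below is a minimal sketch (not part of the original training code) that loads the adapter and prints the trainable-parameter count; the figures in the trailing comment are illustrative, not measured:

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the frozen base model and attach the LoRA adapter on top of it.
base = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-2.7B")
model = PeftModel.from_pretrained(base, "jacob-c/gptneo-2.7Bloratunning")

# PEFT marks only the LoRA matrices as trainable; the 2.7B base weights stay frozen.
model.print_trainable_parameters()
# Example output (illustrative): trainable params: ... || all params: ~2.7B || trainable%: <1
```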
## Usage
This is a LoRA adapter model and must be loaded using the PEFT library:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

# Load the base model and tokenizer
base_model_name = "EleutherAI/gpt-neo-2.7B"
adapter_name = "jacob-c/gptneo-2.7Bloratunning"

tokenizer = AutoTokenizer.from_pretrained(base_model_name)
base_model = AutoModelForCausalLM.from_pretrained(base_model_name)

# Load the LoRA adapter on top of the frozen base weights
model = PeftModel.from_pretrained(base_model, adapter_name)

# Generate lyrics
prompt = "Write lyrics for a song with the following themes: love, summer, memories. The lyrics should be:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,  # passes input_ids and attention_mask
    max_length=300,
    temperature=0.9,
    top_p=0.93,
    top_k=50,
    repetition_penalty=1.2,
    do_sample=True,
    num_return_sequences=1,
)
generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(generated_text)
```
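For faster inference, the base model can be loaded in half precision on a GPU. This is a minimal sketch, assuming a CUDA-capable GPU and the `accelerate` package for `device_map="auto"`; neither is a requirement of the adapter itself:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

base_model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/gpt-neo-2.7B",
    torch_dtype=torch.float16,  # halves memory use; assumes a CUDA-capable GPU
    device_map="auto",          # requires the accelerate package
)
model = PeftModel.from_pretrained(base_model, "jacob-c/gptneo-2.7Bloratunning")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-2.7B")

inputs = tokenizer("Write lyrics about rain:", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```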
## Example Outputs

Prompt: Write lyrics for a song with themes of night, stars, and dreams.

Generated lyrics:

```
Under the canvas of midnight blue
Stars like diamonds, falling through
Dreams that whisper what might be true
In the quiet night, I think of you

CHORUS:
Starlight dancing in your eyes
Dreams we chase across the skies
Nothing's lost and nothing dies
In this moment frozen in time
```
## LoRA Adapter Details
This model uses Low-Rank Adaptation (LoRA), a parameter-efficient fine-tuning method that significantly reduces the number of trainable parameters by adding pairs of rank-decomposition matrices to existing weights while freezing the original parameters.
LoRA configuration (reconstructed as a PEFT `LoraConfig` in the sketch after this list):
- r: 16
- alpha: 32
- Target modules: q_proj, k_proj, v_proj, out_proj
- Dropout: 0.05
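For concreteness, these values map onto a PEFT `LoraConfig` roughly as follows. This is a reconstruction from the list above, not the exact training script:

```python
from peft import LoraConfig, TaskType

# Reconstructed from the values listed above; the original training script may differ.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=16,                 # rank of the low-rank update matrices
    lora_alpha=32,        # scaling factor (effective scale = alpha / r = 2)
    target_modules=["q_proj", "k_proj", "v_proj", "out_proj"],
    lora_dropout=0.05,
)
```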
## Training Process

The model was fine-tuned on lyrics from multiple genres (a sketch of a comparable training setup appears after this list), focusing on:
- Structure and flow
- Rhyme patterns
- Emotional expressiveness
- Thematic coherence
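The exact training code is not published with this card. The sketch below shows how a comparable run could be set up with PEFT and the `transformers` Trainer; the dataset name `my_lyrics_dataset` and all hyperparameters other than the LoRA values are placeholders, not the settings actually used:

```python
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments,
                          DataCollatorForLanguageModeling)
from peft import LoraConfig, TaskType, get_peft_model
from datasets import load_dataset

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-2.7B")
tokenizer.pad_token = tokenizer.eos_token  # GPT-Neo has no pad token by default

# Wrap the base model so that only the LoRA matrices are trained.
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-2.7B")
model = get_peft_model(model, LoraConfig(
    task_type=TaskType.CAUSAL_LM, r=16, lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "out_proj"], lora_dropout=0.05,
))

# "my_lyrics_dataset" is a placeholder for a dataset with a "text" column of lyrics.
dataset = load_dataset("my_lyrics_dataset", split="train")
tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=dataset.column_names,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lora-lyrics", num_train_epochs=3,
                           per_device_train_batch_size=4, learning_rate=2e-4),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("lora-lyrics")  # saves only the adapter weights
```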
## Limitations

- May occasionally generate repetitive phrases (see the mitigation sketch after this list)
- Quality varies based on the specificity of the prompt
- Sometimes produces lyrics that match popular existing songs too closely
- Works best with clear thematic guidance
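If repetition is a problem for a given prompt, the standard `transformers` generation controls can help. A small example, reusing `model`, `tokenizer`, and `inputs` from the Usage section; the specific values here are suggestions rather than tuned defaults:

```python
outputs = model.generate(
    **inputs,
    max_length=300,
    do_sample=True,
    temperature=0.9,
    top_p=0.93,
    repetition_penalty=1.2,
    no_repeat_ngram_size=3,  # forbid any 3-token sequence from repeating verbatim
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```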
## Citation

If you use this model in your research or applications, please cite:

```bibtex
@misc{gptneo-2.7Bloratunning,
  author       = {Jacob C},
  title        = {GPT-Neo 2.7B Fine-tuned for Lyrics Generation},
  year         = {2023},
  publisher    = {Hugging Face},
  howpublished = {\url{https://huggingface.co/jacob-c/gptneo-2.7Bloratunning}},
}
```