LoRA Adapter: captain_codebeard

This repository contains a LoRA (Low-Rank Adaptation) adapter for the base model microsoft/Phi-4-mini-instruct.

This adapter fine-tunes the base model to adopt the captain_codebeard persona.

The adapter weight and configuration files are available in this repository.
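
If you only want to inspect the adapter's configuration (base model, adapter type, and so on) without downloading any model weights, PEFT's PeftConfig can read it straight from the Hub. A minimal sketch:

from peft import PeftConfig

# Read only the adapter configuration from the Hub (no model weights are loaded)
config = PeftConfig.from_pretrained(
    "leonvanbokhorst/microsoft-Phi-4-mini-instruct-captain_codebeard-adapter"
)
print(config.base_model_name_or_path)  # should point at microsoft/Phi-4-mini-instruct
print(config.peft_type)                # LORA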

Training Data

This adapter was fine-tuned on the captain_codebeard subset of the leonvanbokhorst/tame-the-weights-personas dataset.
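
To inspect the training data yourself, the dataset can be loaded with the datasets library. How the captain_codebeard subset is exposed (as a named configuration or as a column to filter on) depends on the dataset's layout, so treat the snippet below as a sketch and check the printed schema first.

from datasets import load_dataset

# Load the personas dataset and inspect its structure
ds = load_dataset("leonvanbokhorst/tame-the-weights-personas")
print(ds)

# If the persona name is stored in a column, filter the captain_codebeard rows.
# The "train" split and "persona" column names are assumptions about the schema.
codebeard = ds["train"].filter(lambda ex: ex.get("persona") == "captain_codebeard")
print(codebeard[0])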

Usage (Example with PEFT)

from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model_id = "microsoft/Phi-4-mini-instruct"
adapter_repo_id = "leonvanbokhorst/microsoft-Phi-4-mini-instruct-captain_codebeard-adapter"

# Load the base model and tokenizer
model = AutoModelForCausalLM.from_pretrained(base_model_id)
tokenizer = AutoTokenizer.from_pretrained(base_model_id)

# Apply the LoRA adapter to the base model
model = PeftModel.from_pretrained(model, adapter_repo_id)

# Run inference; responses should now reflect the captain_codebeard persona
input_text = "Explain the concept of technical debt."
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
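
Since microsoft/Phi-4-mini-instruct is a chat-tuned model, prompts formatted with the tokenizer's chat template usually bring out the persona more reliably than raw text. You can also merge the LoRA weights into the base model if you want a standalone checkpoint. A rough sketch of both, continuing from the code above:

# Format the prompt with the model's chat template (recommended for instruct models)
messages = [{"role": "user", "content": "Explain the concept of technical debt."}]
chat_inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(chat_inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

# Optionally merge the adapter into the base weights and save a standalone model
# (the output directory name below is just an example)
merged = model.merge_and_unload()
merged.save_pretrained("phi4-mini-captain_codebeard-merged")
tokenizer.save_pretrained("phi4-mini-captain_codebeard-merged")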