This project fine-tunes the microsoft/Phi-4-reasoning-plus model on a financial reasoning dataset (TheFinAI/Fino1_Reasoning_Path_FinQA).
Install the required packages:
pip install -U datasets accelerate peft trl bitsandbytes
pip install -U transformers
pip install huggingface_hub[hf_xet]
Make sure your Hugging Face token is stored in an environment variable:
export HF_TOKEN=your_huggingface_token
The notebook will automatically log you in using this token.
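Concretely, that login step typically looks like this (a minimal sketch; the notebook's own call may differ):
import os
from huggingface_hub import login

# Authenticate with the Hugging Face Hub using the HF_TOKEN environment variable
login(token=os.environ["HF_TOKEN"])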
Load the Model and Tokenizer
The script downloads the full Phi-4-reasoning-plus model and its tokenizer.
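The notebook's exact loading code is not reproduced here; the sketch below assumes 4-bit quantization via bitsandbytes (consistent with it being installed above) so the model fits on a single GPU:
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base_model_id = "microsoft/Phi-4-reasoning-plus"

# 4-bit NF4 quantization keeps the model within a single-GPU memory budget (assumption)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)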
Prepare the Dataset
The training data is the first 1,000 samples of TheFinAI/Fino1_Reasoning_Path_FinQA, formatted with the prompt template shown further below.
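A minimal sketch of the loading step (the split slice is an assumption about how the first 1,000 samples are selected):
from datasets import load_dataset

# Keep only the first 1000 training rows of the financial reasoning dataset
dataset = load_dataset("TheFinAI/Fino1_Reasoning_Path_FinQA", split="train[:1000]")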
Fine-tuning
The full training configuration lives in the notebook linked below.
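As a rough sketch only, a typical TRL SFTTrainer run with a LoRA adapter might look like the block below. It assumes the quantized model and tokenizer loaded earlier and a dataset already mapped into a single "text" column with the prompt template shown further below; the rank, learning rate, and batch sizes are placeholder assumptions, not the notebook's exact values.
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# LoRA adapter configuration (rank and target selection are placeholder assumptions)
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules="all-linear",
    task_type="CAUSAL_LM",
)

# Training arguments (illustrative values, not the notebook's exact settings)
training_args = SFTConfig(
    output_dir="Phi-4-Reasoning-Plus-FinQA-COT",
    dataset_text_field="text",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=4,
    num_train_epochs=1,
    learning_rate=2e-4,
    logging_steps=10,
    bf16=True,
)

trainer = SFTTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset,
    peft_config=peft_config,
    processing_class=tokenizer,
)
trainer.train()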
Push Fine-tuned Model
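Continuing from the training sketch above, pushing the LoRA adapter and tokenizer to the Hub could look like this (the repository name matches the adapter loaded in the inference example at the end of this page):
adapter_repo = "kingabzpro/Phi-4-Reasoning-Plus-FinQA-COT"

# Upload the trained LoRA adapter and the tokenizer to the Hugging Face Hub
trainer.model.push_to_hub(adapter_repo)
tokenizer.push_to_hub(adapter_repo)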
Here is the training notebook: Fine_tuning_Phi-4-Reasoning-Plus
Base model: microsoft/Phi-4-reasoning-plus
A GPU is required (an nvidia-smi check is included in the notebook).
Prompt Template
Each training sample is wrapped in the following chat-style template; the three placeholders hold the question, the reasoning chain, and the final answer:
<|im_start|>system<|im_sep|>
Below is an instruction that describes a task, paired with an input that provides further context.
Write a response that appropriately completes the request.
Before answering, think carefully about the question and create a step-by-step chain of thoughts to ensure a logical and accurate response.
<|im_end|>
<|im_start|>user<|im_sep|>
{}<|im_end|>
<|im_start|>assistant<|im_sep|>
<think>
{}
</think>
{}
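The notebook maps each dataset row into a single training string using this template. A rough sketch of that step, continuing from the loading snippet above (the column names are assumptions and may not match the dataset's actual schema):
# Reproduce the template above as a Python format string with three placeholders
train_prompt = """<|im_start|>system<|im_sep|>
Below is an instruction that describes a task, paired with an input that provides further context.
Write a response that appropriately completes the request.
Before answering, think carefully about the question and create a step-by-step chain of thoughts to ensure a logical and accurate response.
<|im_end|>
<|im_start|>user<|im_sep|>
{}<|im_end|>
<|im_start|>assistant<|im_sep|>
<think>
{}
</think>
{}"""

def format_example(example):
    # Column names below are assumptions; adjust them to the dataset's real schema
    return {
        "text": train_prompt.format(
            example["Open-ended Verifiable Question"],
            example["Complex_CoT"],
            example["Response"],
        )
        + tokenizer.eos_token  # append EOS so the model learns where to stop
    }

dataset = dataset.map(format_example)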
Load the Fine-tuned Model
After training, load the base model and attach the LoRA adapter from the Hub for inference:
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel
import torch
# Base model (original model from Microsoft)
base_model_id = "microsoft/Phi-4-reasoning-plus"
# Your fine-tuned LoRA adapter repository
lora_adapter_id = "kingabzpro/Phi-4-Reasoning-Plus-FinQA-COT"
# Load base model
base_model = AutoModelForCausalLM.from_pretrained(
base_model_id,
device_map="auto",
torch_dtype=torch.bfloat16,
trust_remote_code=True,
)
# Attach the LoRA adapter
model = PeftModel.from_pretrained(
base_model,
lora_adapter_id,
device_map="auto",
trust_remote_code=True,
)
# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained(base_model_id, trust_remote_code=True)
# Inference example: build a prompt in the chat format used during training
# (the training template above also includes a system message and a <think> block)
question = "What was the percentage change in net revenue between the two years?"  # sample question
prompt = f"<|im_start|>user<|im_sep|>{question}<|im_end|>\n<|im_start|>assistant<|im_sep|>"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=1200)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)