Fine-tuning Phi-4-reasoning-plus on the FinQA Dataset

This project fine-tunes the microsoft/Phi-4-reasoning-plus model using a financial reasoning dataset (TheFinAI/Fino1_Reasoning_Path_FinQA).


Setup

  1. Install the required libraries:
pip install -U datasets accelerate peft trl bitsandbytes
pip install -U transformers
pip install -U "huggingface_hub[hf_xet]"
  2. Authenticate with Hugging Face Hub:

Make sure your Hugging Face token is stored in an environment variable:

export HF_TOKEN=your_huggingface_token

The notebook will automatically log you in using this token.
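
For reference, the login step amounts to something like this (a minimal sketch, assuming the token is exported as HF_TOKEN):

import os
from huggingface_hub import login

# Authenticate with the Hugging Face Hub using the token from the environment
login(token=os.environ["HF_TOKEN"])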


How to Run

  1. Load the Model and Tokenizer
    The notebook downloads the full Phi-4-reasoning-plus model and its tokenizer (a loading and data-prep sketch follows this list).

  2. Prepare the Dataset

    • The notebook uses TheFinAI/Fino1_Reasoning_Path_FinQA (first 1000 samples).
    • It formats each example into an instruction-following prompt with step-by-step chain-of-thought reasoning.
  3. Fine-tuning

    • Fine-tuning is set up with PEFT using LoRA adapters, so only a small subset of model parameters is trained.
    • TRL (Transformer Reinforcement Learning) supplies the trainer used for efficient supervised fine-tuning; a training sketch also follows this list.
  4. Push Fine-tuned Model

    • After training, the fine-tuned model and tokenizer are pushed back to your Hugging Face account.
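
For illustration, here is a minimal, untested sketch of steps 1 and 2. The dataset column names (Open-ended Verifiable Question, Complex_CoT, Response) are assumptions; verify them against the dataset card before use:

import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model_id = "microsoft/Phi-4-reasoning-plus"

# Step 1: load the full model and tokenizer
tokenizer = AutoTokenizer.from_pretrained(base_model_id)
model = AutoModelForCausalLM.from_pretrained(
    base_model_id, device_map="auto", torch_dtype=torch.bfloat16
)

# Step 2: take the first 1000 samples and render them into the prompt template
dataset = load_dataset("TheFinAI/Fino1_Reasoning_Path_FinQA", split="train[:1000]")

PROMPT = """<|im_start|>system<|im_sep|>
Below is an instruction that describes a task, paired with an input that provides further context.
Write a response that appropriately completes the request.
Before answering, think carefully about the question and create a step-by-step chain of thoughts to ensure a logical and accurate response.
<|im_end|>
<|im_start|>user<|im_sep|>
{}<|im_end|>
<|im_start|>assistant<|im_sep|>
<think>
{}
</think>
{}"""

def to_text(example):
    # Column names are assumptions; check the dataset card for the real ones
    text = PROMPT.format(
        example["Open-ended Verifiable Question"],
        example["Complex_CoT"],
        example["Response"],
    ) + tokenizer.eos_token  # append EOS so the model learns to stop
    return {"text": text}

dataset = dataset.map(to_text)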

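And a sketch of steps 3 and 4, pairing PEFT's LoraConfig with TRL's SFTTrainer. The LoRA rank, target modules, and training arguments below are placeholder values, not the notebook's exact settings:

from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# Step 3: train only a small set of LoRA adapter weights
peft_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

trainer = SFTTrainer(
    model=model,            # from the previous sketch
    train_dataset=dataset,  # formatted with the "text" field above
    peft_config=peft_config,
    args=SFTConfig(
        output_dir="Phi-4-Reasoning-Plus-FinQA-COT",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        dataset_text_field="text",  # recent TRL versions take this via SFTConfig
    ),
)
trainer.train()

# Step 4: push the trained adapter and tokenizer to your Hugging Face account
trainer.push_to_hub()
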
Here is the training notebook: Fine_tuning_Phi-4-Reasoning-Plus

Model Configuration

  • Base Model: microsoft/Phi-4-reasoning-plus
  • Training: PEFT + TRL
  • Dataset: TheFinAI/Fino1_Reasoning_Path_FinQA (first 1000 examples)

Notes

  • GPU Required: Make sure you have access to at least one A100 GPU. You can rent one from RunPod by the hour; training took only about 7 minutes.
  • Environment: The notebook expects an environment where NVIDIA CUDA drivers are available (nvidia-smi check is included).
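
The environment check amounts to the usual notebook cell:

!nvidia-smi  # confirm the CUDA driver and a GPU are visible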

Example Prompt Format

<|im_start|>system<|im_sep|>
Below is an instruction that describes a task, paired with an input that provides further context. 
Write a response that appropriately completes the request. 
Before answering, think carefully about the question and create a step-by-step chain of thoughts to ensure a logical and accurate response.
<|im_end|>
<|im_start|>user<|im_sep|>
{}<|im_end|>
<|im_start|>assistant<|im_sep|>
<think>
{}
</think>
{}
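
The three {} placeholders receive the question, the chain-of-thought, and the final answer, in that order. A hypothetical fill, assuming the block above is stored in a string named template (values invented for illustration):

example = template.format(
    "By what percentage did revenue grow from 2018 to 2019?",       # question
    "Revenue rose from $100M to $120M; (120 - 100) / 100 = 0.20.",  # reasoning inside <think>
    "Revenue grew by 20%.",                                         # final answer
)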

Usage Script (untested)

from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel
import torch

# Base model (the original model from Microsoft)
base_model_id = "microsoft/Phi-4-reasoning-plus"

# Your fine-tuned LoRA adapter repository
lora_adapter_id = "kingabzpro/Phi-4-Reasoning-Plus-FinQA-COT"


# Load base model
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    device_map="auto",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
)

# Attach the LoRA adapter
model = PeftModel.from_pretrained(
    base_model,
    lora_adapter_id,
    device_map="auto",
    trust_remote_code=True,
)

# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained(base_model_id, trust_remote_code=True)

# Inference example: build a prompt in the chat format shown above
# (prepend the system message from the Example Prompt Format section for best results)
question = "What was the percentage change in net income from 2008 to 2009?"  # hypothetical question
prompt = (
    "<|im_start|>user<|im_sep|>\n"
    f"{question}<|im_end|>\n"
    "<|im_start|>assistant<|im_sep|>\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=1200)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)

print(response)