metadata
base_model: unsloth/Qwen3-1.7B
library_name: peft
license: mit
datasets:
- Akhil-Theerthala/PersonalFinance_v2
language:
- en
pipeline_tag: text-generation
tags:
- finance
- transformers
- unsloth
- trl
Model Details
This model is fine-tuned for instruction-following in the domain of personal finance, with a focus on:
- Budgeting advice
- Investment strategies
- Credit management
- Retirement planning
- Insurance and financial planning concepts
- Personalized financial reasoning
Model Description
- License: MIT
- Finetuned from model: unsloth/Qwen3-1.7B
- Dataset: The model was fine-tuned on the PersonalFinance_v2 dataset, curated and published by Akhil-Theerthala.
Model Capabilities
- Understands user queries and provides contextual financial advice.
- Responds in a chat-style conversational format.
- Trained to follow multi-turn instructions and deliver clear, structured, and accurate financial reasoning (a minimal multi-turn sketch is shown after this list).
- Generalizes well to novel personal finance questions and explanations.
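A minimal sketch of a multi-turn exchange, assuming the base tokenizer is loaded as in the getting-started section below; the conversation content itself is purely illustrative:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen3-1.7B")

# Earlier turns are passed back in as context; the chat template renders the full history.
messages = [
    {"role": "user", "content": "How much should I keep in an emergency fund?"},
    {"role": "assistant", "content": "A common guideline is three to six months of essential expenses."},
    {"role": "user", "content": "My income is irregular. Should I aim higher?"},
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)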
Uses
Direct Use
- Chatbots for personal finance
- Educational assistants for financial literacy
- Decision support for simple financial planning
- Interactive personal finance Q&A systems
Bias, Risks, and Limitations
- Not a substitute for licensed financial advisors.
- The model's advice is based on training data and may not reflect region-specific laws, regulations, or financial products.
- May occasionally hallucinate or give generic responses in ambiguous scenarios.
- Assumes user input is well-formed and relevant to personal finance.
How to Get Started with the Model
Use the code below to get started with the model.
from transformers import AutoTokenizer, AutoModelForCausalLM, TextStreamer
from peft import PeftModel
# Load the tokenizer and base model, then attach the LoRA adapter.
tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen3-1.7B")
base_model = AutoModelForCausalLM.from_pretrained(
    "unsloth/Qwen3-1.7B",
    device_map={"": 0},
)
model = PeftModel.from_pretrained(base_model, "khazarai/Personal-Finance-R1")
question ="""
$19k for a coding bootcamp
Hi!
I was just accepted into the full-time software engineering program with Flatiron and have approx. $0 to my name.
I know I can get a loan with either Climb or accent with around 6.50% interest, is this a good option?
I would theoretically be paying near $600/month.
I really enjoy coding and would love to start a career in tech but the potential $19k price tag is pretty scary. Any advice?
"""
messages = [
    {"role": "user", "content": question}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True,  # include Qwen3's <think> reasoning block before the answer
)
# Stream the response as it is generated (TextStreamer prints tokens to stdout).
_ = model.generate(
    **tokenizer(text, return_tensors="pt").to("cuda"),
    max_new_tokens=2048,
    temperature=0.6,
    top_p=0.95,
    top_k=20,
    streamer=TextStreamer(tokenizer, skip_prompt=True),
)
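The sampling values above (temperature 0.6, top_p 0.95, top_k 20) match the settings generally recommended for Qwen3 in thinking mode. If you prefer direct answers without the reasoning trace, a minimal variation, reusing the tokenizer, model, and messages from above, is to disable thinking and switch to the settings usually suggested for non-thinking mode:

text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=False,  # skip the <think> reasoning block
)
_ = model.generate(
    **tokenizer(text, return_tensors="pt").to("cuda"),
    max_new_tokens=1024,
    temperature=0.7,  # commonly suggested sampling values for Qwen3 non-thinking mode
    top_p=0.8,
    top_k=20,
    streamer=TextStreamer(tokenizer, skip_prompt=True),
)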
To use the model through the Transformers pipeline API instead:
from transformers import pipeline, AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen3-1.7B")
base_model = AutoModelForCausalLM.from_pretrained("unsloth/Qwen3-1.7B")
model = PeftModel.from_pretrained(base_model, "khazarai/Personal-Finance-R1")
question ="""
$19k for a coding bootcamp
Hi!
I was just accepted into the full-time software engineering program with Flatiron and have approx. $0 to my name.
I know I can get a loan with either Climb or accent with around 6.50% interest, is this a good option?
I would theoretically be paying near $600/month.
I really enjoy coding and would love to start a career in tech but the potential $19k price tag is pretty scary. Any advice?
"""
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
messages = [
{"role": "user", "content": question}
]
pipe(messages)
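When given a list of chat messages, the pipeline applies the chat template automatically. In recent Transformers versions the assistant's reply can be read back from the returned message list, and generation settings can be passed in the same call (the values below simply mirror the earlier example):

outputs = pipe(messages, max_new_tokens=2048, temperature=0.6, top_p=0.95, top_k=20)
# generated_text holds the full conversation; the last message is the model's reply.
print(outputs[0]["generated_text"][-1]["content"])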
Training Details
Training Data
Dataset Overview: PersonalFinance_v2 is a collection of high-quality instruction-response pairs focused on personal finance topics. It covers a wide range of subjects including budgeting, saving, investing, credit management, retirement planning, insurance, and financial literacy.
Data Format: The dataset consists of conversational-style prompts paired with detailed, well-structured responses, organized so that instruction-following language models can learn to generate coherent financial advice and reasoning.
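As an illustration of how a single pair can be loaded and rendered into the model's chat format: the field names ("instruction"/"response") and the split name below are assumptions for the sketch, so check the dataset card for the actual schema.

from datasets import load_dataset
from transformers import AutoTokenizer

# Hypothetical field and split names; verify against the PersonalFinance_v2 dataset card.
dataset = load_dataset("Akhil-Theerthala/PersonalFinance_v2", split="train")
tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen3-1.7B")

example = dataset[0]
messages = [
    {"role": "user", "content": example["instruction"]},
    {"role": "assistant", "content": example["response"]},
]
text = tokenizer.apply_chat_template(messages, tokenize=False)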
Framework versions
- PEFT 0.14.0