Model Card for jmz365/lumicare-lora

Lumicare‑LoRA is a LoRA adapter trained to turn microsoft/DialoGPT-small into a supportive, therapeutic‑style mental‑health chatbot. It was fine‑tuned on a synthetic, slot‑expanded counselling dataset covering anxiety, depression, stress, relationships, self‑esteem, trauma, crisis intervention, and basic greetings.


Model Details

Model Description

Lumicare‑LoRA adds a lightweight adapter (≈1.6 M parameters) on top of the 117 M‑parameter microsoft/DialoGPT-small base, teaching it to respond in a compassionate, context‑aware style. The adapter was trained for 10 epochs with an effective batch size of 32, a learning rate of 2 × 10⁻⁴, and LoRA hyperparameters r=16, α=32, dropout=0.05.
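
For reference, the adapter configuration described above corresponds roughly to the peft setup sketched below. This is not the exact training script; in particular, target_modules and fan_in_fan_out are assumptions based on DialoGPT's GPT‑2‑style attention layers rather than confirmed details.

from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-small")

# Hyperparameters from the description above; target_modules and
# fan_in_fan_out are assumptions for GPT-2-style Conv1D attention layers.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["c_attn"],
    fan_in_fan_out=True,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # check the trainable-parameter count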

  • Developed by: Jamal (jmz365)
  • Model type: Causal language model (adapter only)
  • Language: English
  • License: MIT
  • Finetuned from: microsoft/DialoGPT-small

Uses

Direct Use

Load the base model, attach the adapter with the peft library, and generate empathetic responses through a Hugging Face pipeline:

from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
from peft import PeftModel
import torch

base_id = "microsoft/DialoGPT-small"
repo_id = "jmz365/lumicare-lora"

# The repo contains only the LoRA adapter, so load the base model first,
# then attach the adapter weights with peft.
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.float16,
    device_map="auto",
)
model = PeftModel.from_pretrained(base_model, repo_id)

# device_map="auto" has already placed the weights, so don't pass device= here.
gen = pipeline("text-generation", model=model, tokenizer=tokenizer)

prompt = (
    "<|assistant|> You are a supportive mental-health coach. "
    "Please respond clearly and compassionately. <|end|>\n"
    "<|user|> I've been feeling anxious lately and can't sleep. <|end|>\n"
    "<|assistant|>"
)
out = gen(prompt, max_new_tokens=64, do_sample=True, temperature=0.7, top_p=0.8)
print(out[0]["generated_text"])
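
If you prefer a standalone checkpoint over loading the adapter at run time, the LoRA weights can be folded back into the base model with peft and saved. A minimal sketch, continuing from the snippet above (the output directory name is arbitrary):

# Fold the LoRA weights into the base weights and save a regular checkpoint.
merged = model.merge_and_unload()
merged.save_pretrained("lumicare-merged")
tokenizer.save_pretrained("lumicare-merged")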
