---
base_model: microsoft/DialoGPT-small
library_name: peft
---
# Model Card for jmz365/lumicare-lora
**Lumicare‑LoRA** is a set of LoRA adapters trained to turn DialoGPT‑small into a supportive, therapeutic‐style mental‑health chatbot. It was fine‑tuned on a synthetic, slot‑expanded counselling dataset covering anxiety, depression, stress, relationships, self‑esteem, trauma, crisis intervention, and basic greetings.
---
## Model Details
### Model Description
Lumicare‑LoRA adds a lightweight adapter (≈1.6 M parameters) on top of the 117 M‑parameter `microsoft/DialoGPT-small` base, teaching it to respond in a compassionate, context‑aware style. The adapter was trained for 10 epochs with an effective batch size of 32, a learning rate of 2 × 10⁻⁴, and LoRA hyperparameters r=16, α=32, dropout=0.05.
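These hyperparameters map onto a PEFT `LoraConfig` roughly as follows. This is a sketch, not the actual training script: `target_modules` is an assumption (GPT‑2‑family models are commonly adapted on the fused attention projection `c_attn`), and the trainable‑parameter count depends on which modules are targeted.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Load the frozen 117M-parameter base model.
base = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-small")

# LoRA hyperparameters as stated above. `target_modules` is assumed;
# the actual training script may target additional modules.
config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["c_attn"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # adapter size varies with targeted modules
```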
- **Developed by:** Jamal (`jmz365`)
- **Model type:** Causal language model (adapter only)
- **Language:** English
- **License:** MIT
- **Finetuned from:** `microsoft/DialoGPT-small`
### Model Sources
- **Repository:** https://huggingface.co/jmz365/lumicare-lora
- **Training script:** [`training_model.py`](https://github.com/jmz365/LumiCare/blob/main/finetune/training_model.py)
- **Data generator:** [`generate_dialogs.py`](https://github.com/jmz365/LumiCare/blob/main/generate_dialogs.py)
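The slot‑expansion idea behind the dataset is to multiply a small set of seed templates by filling placeholder slots from word lists. Below is a minimal sketch of the technique; the templates, slots, and output format are hypothetical and the real `generate_dialogs.py` may differ.

```python
import itertools
import json

# Hypothetical templates and slot values for illustration only.
TEMPLATES = [
    ("I've been feeling {feeling} about {topic} lately.",
     "It sounds like {topic} has been weighing on you, and feeling "
     "{feeling} in that situation is understandable. What tends to trigger it?"),
]
SLOTS = {
    "feeling": ["anxious", "overwhelmed", "hopeless"],
    "topic": ["work", "my relationship", "school"],
}

def expand():
    """Fill every template with every combination of slot values."""
    for user_t, bot_t in TEMPLATES:
        for feeling, topic in itertools.product(SLOTS["feeling"], SLOTS["topic"]):
            yield {"user": user_t.format(feeling=feeling, topic=topic),
                   "assistant": bot_t.format(feeling=feeling, topic=topic)}

with open("dialogs.jsonl", "w") as f:
    for example in expand():
        f.write(json.dumps(example) + "\n")
```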
---
## Uses
### Direct Use
The repository ships only the LoRA adapter weights, so load the base model first, attach the adapter with `peft`, and wrap the result in a Hugging Face pipeline to generate empathetic responses:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
from peft import PeftModel
import torch

base_id = "microsoft/DialoGPT-small"
repo_id = "jmz365/lumicare-lora"

# The repo contains adapter weights only: load the base model,
# then attach the LoRA adapter with PEFT.
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.float16,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, repo_id)

# No `device` argument here: the model was already placed by `device_map="auto"`.
gen = pipeline("text-generation", model=model, tokenizer=tokenizer)

prompt = (
    "<|assistant|> You are a supportive mental-health coach. "
    "Please respond clearly and compassionately. <|end|>\n"
    "<|user|> I've been feeling anxious lately and can't sleep. <|end|>\n"
    "<|assistant|>"
)
out = gen(prompt, max_new_tokens=64, temperature=0.7, top_p=0.8)
print(out[0]["generated_text"])
```
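For deployment you can also fold the adapter into the base weights and save a standalone checkpoint. This is standard PEFT functionality rather than anything specific to this repo, and it continues from the `model` and `tokenizer` objects above:

```python
# Merge the LoRA weights into the base model and save a self-contained copy.
merged = model.merge_and_unload()
merged.save_pretrained("lumicare-merged")
tokenizer.save_pretrained("lumicare-merged")
```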