---
library_name: transformers
tags:
  - MentalHealth
  - Counseling
  - Chatbot
  - InstructionTuning
  - QLoRA
datasets:
  - Amod/mental_health_counseling_conversations
language:
  - en
base_model:
  - TinyLlama/TinyLlama-1.1B-Chat
pipeline_tag: text-generation
---

# Model Card: Mental Health Counselor Chatbot (TinyLlama-1.1B)

This model is a lightweight mental health chatbot built on `TinyLlama-1.1B-Chat` and fine-tuned with QLoRA on the `Amod/mental_health_counseling_conversations` dataset.

> ⚠️ **Note:** This model was fine-tuned on Google Colab (free T4 GPU) for only 1 epoch, as a quick test of TinyLlama's ability to respond to counseling prompts.
> 🧠 Performance can improve significantly with longer training, more data, and better hyperparameter tuning.
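The exact training configuration is not documented in this card. For readers who want to reproduce or extend the fine-tune, a minimal QLoRA setup along these lines is typical (the hyperparameters below are illustrative assumptions, not the values actually used for this checkpoint):

```python
# Hypothetical QLoRA setup sketch; the actual hyperparameters for this
# checkpoint are not documented in this card.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Quantize the frozen base weights to 4-bit NF4 (the "Q" in QLoRA)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "TinyLlama/TinyLlama-1.1B-Chat",
    quantization_config=bnb_config,
)
model = prepare_model_for_kbit_training(model)

# Attach small trainable LoRA adapters to the attention projections
lora_config = LoraConfig(
    r=16,                 # assumed rank, not confirmed
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
```

Only the adapter weights are trained, which is what makes a 1.1B fine-tune feasible on a free Colab T4.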


## Model Details

### Model Description

⚠️ This is a prototype model. It was fine-tuned for only 1 epoch on a small sample of the dataset, for demonstration and testing purposes.


## Uses

### Direct Use

Generating supportive, empathetic responses to mental health-related user inputs. Useful for:

- Mental health Q&A bots
- Conversational agents in wellness apps

### Out-of-Scope Use

- Not a substitute for licensed therapy.
- Should not be used for clinical decisions or crisis support.

## Bias, Risks & Limitations

- The model may produce biased or generic responses.
- It was trained on a single small dataset, so coverage is limited.
- It may hallucinate or offer vague advice when prompted outside its domain.

## How to Use

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned model and tokenizer from the Hub
model = AutoModelForCausalLM.from_pretrained("ali1001/mental-health-tinyllama-bot")
tokenizer = AutoTokenizer.from_pretrained("ali1001/mental-health-tinyllama-bot")

prompt = "I'm feeling very anxious lately and can't sleep. What should I do?"
inputs = tokenizer(prompt, return_tensors="pt")

# Greedy decoding by default; cap the response length
output = model.generate(**inputs, max_new_tokens=150)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
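Chat-tuned TinyLlama variants generally respond better when the prompt is wrapped in the chat template rather than passed as a bare string; in practice you would use `tokenizer.apply_chat_template` for this. As a rough sketch of what that template produces (assuming the Zephyr-style `<|user|>`/`<|assistant|>` format used by the TinyLlama chat base model):

```python
# Sketch of the Zephyr-style chat format assumed for the TinyLlama chat base
# model; tokenizer.apply_chat_template builds this for you, so this helper is
# purely illustrative.
def build_chat_prompt(
    user_message: str,
    system_message: str = "You are a supportive, empathetic counselor.",
) -> str:
    return (
        f"<|system|>\n{system_message}</s>\n"
        f"<|user|>\n{user_message}</s>\n"
        f"<|assistant|>\n"
    )

prompt = build_chat_prompt("I'm feeling very anxious lately and can't sleep.")
print(prompt)
```

The trailing `<|assistant|>\n` cues the model to generate the counselor turn rather than continuing the user's text.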