**Tokenizer Included:** This repository contains the tokenizer. You can load it directly using:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Jaamie/gemma-mental-health-qlora")
```
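For example, a quick round-trip check (the sample text is illustrative):

```python
encoded = tokenizer("I've been feeling overwhelmed lately.", return_tensors="pt")
print(tokenizer.decode(encoded["input_ids"][0], skip_special_tokens=True))
```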
# Mental Health Diagnosis and Support Assistant

## Model Card for `Jaamie/gemma-mental-health-qlora`
## Model Details

- Model Name: Gemma Mental Health QLoRA Assistant
- Developed by: Jaamie
- Finetuned from: `google/gemma-2-9b-it`
- Model Type: Causal Language Model with QLoRA
- Language(s): English
- License: Apache 2.0
- Frameworks: Hugging Face Transformers, PEFT (v0.14.0), BitsAndBytes
- Quantization: 4-bit (`bnb_config` via BitsAndBytes)
- Adapter Type: LoRA (rank = 8, α = 16)
## Data Sources

This model was fine-tuned on a combination of mental health-related datasets from Kaggle:
- 3k Conversations Dataset for Chatbot
- Depression Reddit Cleaned
- Human Stress Prediction
- Predicting Anxiety in Mental Health Data
- Mental Health Dataset Bipolar
- Reddit Mental Health Data
- Students Anxiety and Depression Dataset
- Suicidal Mental Health Dataset
- Suicidal Tweet Detection Dataset
These datasets span various diagnoses like Anxiety, Stress, Depression, Bipolar, Suicidal Ideation, and Personality Disorders.
## Uses

### Direct Use
- Predict user diagnosis (e.g., Anxiety, Depression)
- Retrieve contextually relevant documents via FAISS
- Generate response text including symptoms, precautions, and helpline info
### Out-of-Scope Use
- Not intended for real-time clinical decision-making
- Not a substitute for licensed mental health professionals
- Not for use on private or sensitive medical data without proper anonymization
## Bias, Risks, and Limitations
- The model is trained on publicly available mental health datasets and may reflect bias from those sources.
- Predictions and suggestions should be verified by a professional for critical use cases.
- Not fine-tuned for children, multilingual users, or clinical-grade diagnostics.
## How to Get Started
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel, PeftConfig

# Load the PEFT config to find the base model
peft_config = PeftConfig.from_pretrained("Jaamie/gemma-mental-health-qlora")

# Load the base model in half precision
base = AutoModelForCausalLM.from_pretrained(
    peft_config.base_model_name_or_path,
    device_map="auto",
    torch_dtype=torch.float16,
)

# Attach the LoRA fine-tuned adapter
model = PeftModel.from_pretrained(base, "Jaamie/gemma-mental-health-qlora")
tokenizer = AutoTokenizer.from_pretrained("Jaamie/gemma-mental-health-qlora")
```
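A minimal generation example with the loaded model (the prompt and sampling settings below are illustrative, not a fixed configuration of this project):

```python
prompt = "I haven't been sleeping and I feel anxious all the time. What could be going on?"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```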
## Training Details

### Training Data
- Combined ~52,000 samples
- Balanced subset used: 1500 records per diagnosis (7 categories)
### Training Procedure

- Quantized 4-bit training using `bitsandbytes`
- Fine-tuned using QLoRA via Hugging Face PEFT
- Prompt structure: User → Diagnosis → Context → Output (an illustrative template is sketched below)
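The exact prompt template is not reproduced in this card; the snippet below is only an assumed reconstruction of the User → Diagnosis → Context → Output layout:

```python
# Assumed layout only; the actual training template may differ.
PROMPT_TEMPLATE = """User: {user_input}
Diagnosis: {diagnosis}
Context: {retrieved_context}
Output: {response}"""
```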
### Training Hyperparameters
- Epochs: 2
- Batch size: 4
- Gradient Accumulation: 2
- Learning Rate: 2e-5
- Mixed precision: FP16
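As a rough sketch, these hyperparameters map onto a `bitsandbytes` + PEFT + Transformers setup along the following lines (the quantization type, LoRA dropout, and `output_dir` are assumptions not stated in this card):

```python
import torch
from transformers import BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig

# 4-bit quantization config ("bnb_config") used when loading the base model
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # assumption: NF4 is the usual QLoRA choice
    bnb_4bit_compute_dtype=torch.float16,
)

# LoRA adapter settings from the Model Details section (rank 8, alpha 16)
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,                      # assumption: dropout not reported
    task_type="CAUSAL_LM",
)

# Hyperparameters listed above
training_args = TrainingArguments(
    output_dir="gemma-mental-health-qlora", # placeholder
    num_train_epochs=2,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=2,
    learning_rate=2e-5,
    fp16=True,
)
```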
## Evaluation

### Testing Data

- Same structure as the training data; validation split of 2,000 samples
### Metrics

| Epoch | Training Loss | Validation Loss |
|---|---|---|
| 1 | 0.685 | 0.99 |
| 2 | 0.799 | 0.98 |
### Result Summary
- Model generalizes well across 7 classes
- Retains fluency in text generation using retrieved RAG context
## Environmental Impact

| Component | Value |
|---|---|
| Hardware Type | A100 (40 GB) GPU |
| Hours Used | ~3.5 hours |
| Cloud Provider | Google Colab Pro |
| Region | US |
| Carbon Emitted | ~1.1 kg CO₂eq (estimated) |
Source: Lacoste et al., 2019
## Technical Specs

- Base Model: `google/gemma-2-9b-it`
- LoRA Adapter: `peft==0.14.0`
- Embedding Model (RAG): `BAAI/bge-base-en-v1.5`
- Retrieval: FAISS (prebuilt index + documents); a retrieval sketch follows this list
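A minimal retrieval sketch, assuming the prebuilt FAISS index holds normalized `BAAI/bge-base-en-v1.5` embeddings of the support documents (the index file name and document loading are placeholders):

```python
import faiss
from sentence_transformers import SentenceTransformer

# Embedding model used for RAG
embedder = SentenceTransformer("BAAI/bge-base-en-v1.5")

# Placeholders: the prebuilt index and the documents it was built from, in index order
index = faiss.read_index("mental_health_docs.index")
documents: list[str] = []  # fill with the document texts backing the index

def retrieve(query: str, k: int = 3) -> list[str]:
    # Normalized embeddings make inner-product search behave like cosine similarity
    query_vec = embedder.encode([query], normalize_embeddings=True)
    _, ids = index.search(query_vec, k)
    return [documents[i] for i in ids[0]]
```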
## Contact & Contributions
Model Card Author: Jaamie
Contact: [Add your preferred email or Hugging Face profile]
Contributions welcome! Please open issues or pull requests on the associated repo.
## Citation

```bibtex
@misc{gemma_mental_health_qlora,
  author       = {Jaamie},
  title        = {Gemma Mental Health Assistant (QLoRA)},
  year         = {2024},
  howpublished = {\url{https://huggingface.co/Jaamie/gemma-mental-health-qlora}},
  note         = {Fine-tuned with PEFT + RAG on curated Kaggle datasets}
}
```
Framework versions:
- PEFT: 0.14.0
- Transformers: >=4.39.0
- BitsAndBytes: 0.41.1+
- Python: 3.11+