Gemma Counseling Chat Model

A fine-tuned Gemma-2B model specialized for empathetic mental health counseling conversations. This model provides supportive, understanding responses to mental health and emotional well-being queries.

⚠️ Important Disclaimer

This model is for educational and research purposes only. It should NOT be used as a replacement for professional mental health services. If you or someone you know is in crisis or needs professional help, please contact:

  • National Suicide Prevention Lifeline: 988 or 1-800-273-8255
  • Crisis Text Line: Text HOME to 741741
  • International Crisis Lines: findahelpline.com

Model Details

Model Description

  • Developed by: ThatITGuy
  • Model type: Causal Language Model (Instruction-tuned)
  • Language(s): English
  • License: Gemma Terms of Use
  • Finetuned from: unsloth/gemma-2b-it-bnb-4bit
  • Training Framework: Unsloth + LoRA

Model Architecture

  • Base Model: Google Gemma-2B-IT
  • Quantization: 4-bit (during training)
  • Fine-tuning Method: LoRA (Low-Rank Adaptation)
    • LoRA Rank: 16
    • LoRA Alpha: 16
    • LoRA Dropout: 0.05
    • Target Modules: q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj
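
For reference, the adapter settings above map onto a peft LoraConfig roughly as follows. The training script for this checkpoint is not published, so treat this as an illustrative sketch rather than the exact configuration used.

from peft import LoraConfig

# Illustrative reconstruction of the adapter settings listed above
lora_config = LoraConfig(
    r=16,                      # LoRA rank
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    bias="none",
    task_type="CAUSAL_LM",
)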

Training Details

  • Dataset: Amod/mental_health_counseling_conversations
  • Training Steps: 100 (a short demonstration run; consider longer training for production use)
  • Batch Size: 2 (with gradient accumulation steps of 4, effective batch size: 8)
  • Learning Rate: 2e-4
  • Optimizer: AdamW 8-bit
  • LR Scheduler: Cosine
  • Max Sequence Length: 2048 tokens
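
Taken together, these hyperparameters would look roughly like the TrainingArguments below. This is an illustrative sketch (the output path is a placeholder and the exact Unsloth training script is not included here); the 2048-token maximum sequence length is configured on the trainer/tokenizer rather than in TrainingArguments.

from transformers import TrainingArguments

# Illustrative reconstruction of the training settings listed above
training_args = TrainingArguments(
    output_dir="gemma-counseling-lora",  # placeholder output path
    per_device_train_batch_size=2,
    gradient_accumulation_steps=4,       # effective batch size: 8
    learning_rate=2e-4,
    max_steps=100,
    optim="adamw_8bit",                  # 8-bit AdamW (requires bitsandbytes)
    lr_scheduler_type="cosine",
)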

Usage

Quick Start

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Load model and tokenizer
model = AutoModelForCausalLM.from_pretrained(
    "ThatITGuy/gemma-counseling-chat-model",
    torch_dtype=torch.float16,
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("ThatITGuy/gemma-counseling-chat-model")

# Prepare your prompt
user_message = "I'm feeling overwhelmed with work stress. Any advice?"
prompt = f"<start_of_turn>user\n{user_message}<end_of_turn>\n<start_of_turn>model\n"

# Generate response
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    temperature=0.7,
    top_p=0.9,
    do_sample=True,
    repetition_penalty=1.1
)

# Decode only the newly generated tokens; skip_special_tokens strips the
# turn markers, so splitting on "<start_of_turn>model" would not work here
response = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
)
print(response)

Using with Pipeline

from transformers import pipeline
import torch

# Create a text generation pipeline
pipe = pipeline(
    "text-generation", 
    model="ThatITGuy/gemma-counseling-chat-model",
    torch_dtype=torch.float16,
    device_map="auto"
)

# Format and generate
prompt = "<start_of_turn>user\nI've been having trouble sleeping lately.<end_of_turn>\n<start_of_turn>model\n"
response = pipe(
    prompt,
    max_new_tokens=256,
    temperature=0.7,
    do_sample=True,          # sampling must be enabled for temperature to take effect
    return_full_text=False,  # return only the newly generated reply, not the prompt
)[0]['generated_text']

Prompt Template

Always use the Gemma conversation format:

<start_of_turn>user
{user_message}<end_of_turn>
<start_of_turn>model
{model_response}<end_of_turn>
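
If the saved tokenizer retains the base Gemma chat template (the unsloth/gemma-2b-it tokenizer ships with one), you can let apply_chat_template build this string instead of concatenating it by hand, which is less error-prone for multi-turn prompts. The conversation below is purely illustrative:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ThatITGuy/gemma-counseling-chat-model")

# "assistant" entries are rendered as "model" turns by the Gemma template
messages = [
    {"role": "user", "content": "I've been having trouble sleeping lately."},
    {"role": "assistant", "content": "That sounds draining. How long has it been going on?"},
    {"role": "user", "content": "About two weeks now."},
]

# add_generation_prompt=True appends the trailing "<start_of_turn>model\n"
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)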

Recommended Generation Parameters

generation_config = {
    "max_new_tokens": 256,
    "temperature": 0.7,      # Adjust for more/less creative responses
    "top_p": 0.9,
    "top_k": 50,
    "repetition_penalty": 1.1,
    "do_sample": True,
    "pad_token_id": tokenizer.eos_token_id
}
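
These settings can be unpacked directly into generate, reusing the model, tokenizer, and inputs from the Quick Start example:

outputs = model.generate(**inputs, **generation_config)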

Example Conversations

Example 1: Stress Management

User: "I'm feeling overwhelmed with work stress. Any advice?"

Model: "I understand that work stress can be really challenging. Here are some strategies that might help: Try breaking large tasks into smaller, manageable steps. Take regular short breaks to breathe deeply and reset. Consider setting boundaries between work and personal time. Remember, it's okay to ask for help when needed. What specific aspect of work is causing you the most stress?"

Example 2: Communication Skills

User: "I don't know how to tell someone how I feel about them."

Model: "Expressing feelings can be difficult, and it's completely normal to feel uncertain. Start by being honest with yourself about what you want to communicate. Choose a comfortable, private setting for the conversation. Use 'I' statements to express your feelings without placing blame. Remember, being vulnerable takes courage, and the right person will appreciate your honesty."

Limitations and Biases

  1. Training Data Limitations: The model was trained on a limited dataset and for only 100 steps, which may affect response quality and diversity.

  2. Not a Healthcare Professional: This model cannot diagnose, treat, or provide medical advice. It's designed for supportive conversation only.

  3. Potential Biases: The model may reflect biases present in the training data, including cultural, demographic, or therapeutic approach biases.

  4. Context Limitations: The model has a maximum context length of 2048 tokens and may lose track of very long conversations.

  5. Crisis Handling: While trained on counseling conversations, the model is NOT equipped to handle crisis situations appropriately.
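
Regarding point 4, long multi-turn chats eventually have to drop their oldest turns before generation. The helper below is a minimal sketch of one way to do that (trim_history is our own illustrative function, not part of the model or any library); it assumes the chat-template usage shown earlier.

def trim_history(messages, tokenizer, max_tokens=2048, reserve=256):
    """Drop the oldest exchanges until the prompt leaves room for `reserve` new tokens."""
    while len(messages) > 1:
        prompt = tokenizer.apply_chat_template(
            messages, tokenize=False, add_generation_prompt=True
        )
        if len(tokenizer(prompt).input_ids) + reserve <= max_tokens:
            break
        messages = messages[2:]  # drop the oldest user/model pair to keep roles alternating
    return messages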

Ethical Considerations

  • Privacy: Never share personal identifying information in conversations
  • Professional Boundaries: This model is not a replacement for professional therapy
  • Crisis Situations: Always direct users in crisis to appropriate professional resources
  • Informed Use: Users should understand this is an AI model, not a human counselor
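
To make the crisis-situations point concrete, a deployment should screen user input before it ever reaches the model. The sketch below uses a deliberately naive keyword list as a placeholder (a real system needs a clinician-reviewed crisis classifier), and generate_reply stands in for the generation code shown earlier:

CRISIS_KEYWORDS = ("suicide", "kill myself", "end my life", "self-harm")

CRISIS_MESSAGE = (
    "It sounds like you may be in crisis. Please reach out for professional help "
    "right now: call or text 988 (US), text HOME to 741741, or visit findahelpline.com."
)

def respond(user_message: str) -> str:
    # Route crisis-related input to human resources instead of the model
    if any(keyword in user_message.lower() for keyword in CRISIS_KEYWORDS):
        return CRISIS_MESSAGE
    return generate_reply(user_message)  # hypothetical wrapper around model.generate(...)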

Future Improvements

  • Extended training with more diverse counseling datasets
  • Implementation of safety filters for crisis detection
  • Multi-turn conversation optimization
  • Validation with mental health professionals
  • Integration of evidence-based therapeutic techniques

Citation

If you use this model in your research, please cite:

@misc{gemma-counseling-chat-2024,
  author = {ThatITGuy},
  title = {Gemma Counseling Chat Model},
  year = {2024},
  publisher = {HuggingFace},
  url = {https://huggingface.co/ThatITGuy/gemma-counseling-chat-model}
}

Acknowledgments

  • Google for the Gemma base model
  • Unsloth AI for the efficient fine-tuning framework
  • Amod for the mental health counseling conversations dataset
  • The open-source community for continuous support and feedback

Contact

For questions, feedback, or concerns about this model, please open an issue in the Community tab or contact through Hugging Face.


Remember: If you're struggling with mental health, please reach out to a qualified professional. You deserve support and care. 💙
