Mental D0C - A Psychotherapy Simulation Model

Mental D0C is a fine-tuned version of the unsloth/Qwen2-7B-Instruct-bnb-4bit model, designed to simulate conversations between a psychotherapist and a patient. It was developed as a research tool for studying the dynamics of therapeutic dialogue in a controlled, ethical setting.

The model was fine-tuned on a synthetic dataset of over 12,000 Italian therapist-patient dialogues, enabling it to generate context-aware, empathetic responses in a therapeutic register. Fine-tuning was performed with the Unsloth library, using LoRA adapters for memory-efficient training; a sketch of a comparable setup follows.
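The exact training script is not published here, but a comparable Unsloth + LoRA setup might look like the sketch below. All hyperparameters, the dataset filename, and the text field name are illustrative assumptions, not the values actually used:

from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load the 4-bit quantized base model
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/Qwen2-7B-Instruct-bnb-4bit",
    max_seq_length = 2048,
    load_in_4bit = True,
)

# Attach LoRA adapters; rank, alpha, and target modules are
# common defaults, not the values used for Mental D0C
model = FastLanguageModel.get_peft_model(
    model,
    r = 16,
    lora_alpha = 16,
    lora_dropout = 0,
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
)

# "dialogues.jsonl" is a placeholder for the synthetic Italian dataset
dataset = load_dataset("json", data_files = "dialogues.jsonl", split = "train")

trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = dataset,
    dataset_text_field = "text",  # assumes one pre-formatted dialogue per row
    max_seq_length = 2048,
    args = TrainingArguments(
        per_device_train_batch_size = 2,
        gradient_accumulation_steps = 4,
        learning_rate = 2e-4,
        num_train_epochs = 1,
        output_dir = "outputs",
    ),
)
trainer.train()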

🔗 GitHub Repository: [Insert your GitHub repository link here]

⚠️ Important Ethical Disclaimer

This is a Research Simulation, Not a Therapist

This model is an experimental tool created for educational and research purposes only. It is NOT a substitute for professional medical or psychological advice, diagnosis, or treatment.

The AI can make mistakes, generate incorrect or inappropriate information, and does not possess the qualifications to provide real therapeutic guidance.

If you are seeking help for your mental health, please contact a qualified healthcare provider or a crisis hotline.

Model Details

  • Base Model: unsloth/Qwen2-7B-Instruct-bnb-4bit
  • Fine-tuning Library: Unsloth
  • Dataset: A synthetic dataset of over 12,000 Italian therapist-patient dialogues.
  • Language: The model was primarily trained on Italian dialogues but retains the multilingual capabilities of its base model.

How to Use

This model is designed to be used for conversational inference. You can load the GGUF version with tools like Ollama or LM Studio, or use the LoRA adapters with the transformers library in Python.
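For the GGUF route in Python, a minimal sketch with the llama-cpp-python package is shown below. The file path is a placeholder; point it at the quantized file you actually downloaded:

from llama_cpp import Llama

# Load a local GGUF file (the filename here is a placeholder)
llm = Llama(
    model_path = "./Mental-D0C-Q4_K_M.gguf",
    n_ctx = 2048,
)

response = llm.create_chat_completion(
    messages = [
        {"role": "user", "content": "Ultimamente mi sento davvero giù e non so perché."},
    ],
    max_tokens = 150,
    temperature = 0.7,
)
print(response["choices"][0]["message"]["content"])

Since the model was trained mainly on Italian dialogues, the example prompt is in Italian ("Lately I've been feeling really down and I don't know why").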

Alternatively, here is a basic example of how to run the model with the LoRA adapters via Unsloth:

from unsloth import FastLanguageModel
from transformers import TextStreamer

# Load the fine-tuned model
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "your-hf-username/Mental-D0C-lora", # Replace with your model name
    max_seq_length = 2048,
    load_in_4bit = True,
    device_map = "auto",
)
FastLanguageModel.for_inference(model)  # enable Unsloth's fast inference mode

# Prepare the conversation
messages = [
    {"role": "user", "content": "I've been feeling really down lately and I don't know why."},
]

input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to("cuda")

# Generate a response
text_streamer = TextStreamer(tokenizer, skip_prompt=True)
_ = model.generate(
    input_ids=input_ids,
    streamer=text_streamer,
    max_new_tokens=150,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    eos_token_id=tokenizer.eos_token_id
)
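For a multi-turn session, append each generated reply to the message history before the next exchange. A minimal sketch, reusing the model, tokenizer, and messages objects from the example above (the follow-up user message is illustrative):

# Generate without the streamer so the reply can be captured as text
output_ids = model.generate(
    input_ids = input_ids,
    max_new_tokens = 150,
    do_sample = True,
    temperature = 0.7,
    top_p = 0.9,
)
reply = tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens = True)

# Extend the history and build the inputs for the next turn
messages.append({"role": "assistant", "content": reply})
messages.append({"role": "user", "content": "It started after I changed jobs last month."})
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt = True, return_tensors = "pt"
).to("cuda")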