---
language: en
license: apache-2.0
library_name: transformers
tags:
  - mental-health
  - text-classification
  - bert
  - nlp
  - depression
  - anxiety
  - suicidal
datasets:
  - sai1908/Mental_Health_Condition_Classification
  - kamruzzaman-asif/reddit-mental-health-classification
metrics:
  - accuracy
  - loss
model-index:
  - name: bert-finetuned-mental-health
    results:
      - task:
          type: text-classification
          name: Text Classification
        dataset:
          name: sai1908/Mental_Health_Condition_Classification
          type: text
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.9656
          - name: Validation Loss
            type: loss
            value: 0.1513
---

# BERT Fine-Tuned for Mental Health Classification

This model is a fine-tuned `bert-base-uncased` transformer trained to classify text inputs into seven mental health categories. It is designed to support emotional analysis in mental health-related applications by detecting signs of psychological distress in user-generated content.

## Try It Out

You can interact with the model in real time via this Streamlit-powered Hugging Face Space:
👉 Live Demo on Hugging Face Spaces

## Datasets Used

1. **sai1908/Mental_Health_Condition_Classification**
   Reddit posts from mental health forums; ~80,000 cleaned entries retained from the original 100,000.

2. **kamruzzaman-asif/reddit-mental-health-classification**
   Additional Reddit mental health posts to improve coverage and diversity (see the loading sketch below).
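
For orientation, here is a minimal sketch of pulling both datasets from the Hub with the 🤗 `datasets` library. The split and column names are not documented above, so treat them as assumptions and inspect the loaded objects for the actual schema:

```python
from datasets import load_dataset

# Load both source datasets from the Hugging Face Hub.
# Split/column names are assumptions -- print the objects (or check
# the dataset cards) to see the actual schema before training.
primary = load_dataset("sai1908/Mental_Health_Condition_Classification")
extra = load_dataset("kamruzzaman-asif/reddit-mental-health-classification")

print(primary)  # shows available splits and columns
print(extra)
```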

## Model Overview

- **Base Model:** `bert-base-uncased`
- **Type:** Multi-class text classification (7 labels)
- **Framework:** Hugging Face Transformers
- **Training Method:** `Trainer` API (PyTorch backend)

## Target Labels

- Anxiety
- Depression
- Bipolar
- Normal
- Personality Disorder
- Stress
- Suicidal
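
The mapping between output indices and these labels isn't listed here; a safe way to recover it is to read it from the published config (this assumes the repo's `config.json` carries `id2label`, which checkpoints saved via the `Trainer` normally do):

```python
from transformers import AutoConfig

# Read the index-to-label mapping straight from the model repo.
config = AutoConfig.from_pretrained("Elite13/bert-finetuned-mental-health")
print(config.id2label)  # e.g. {0: "Anxiety", 1: "Bipolar", ...} -- order is repo-defined
```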

## Training Configuration

| Parameter        | Value                   |
|------------------|-------------------------|
| Epochs           | 3                       |
| Learning Rate    | 2e-5                    |
| Batch Size       | 16                      |
| Max Token Length | 256                     |
| Optimizer        | AdamW                   |
| Hardware         | 2× NVIDIA Tesla T4 GPUs |
| Total FLOPs      | 25,605,736,040,851,200  |
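
For reference, a minimal sketch of how this configuration maps onto the `Trainer` API. Only the hyperparameters come from the table above; the dataset wiring, output directory, and batch-size interpretation (per device vs. total across the two GPUs is not stated) are assumptions:

```python
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

model_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=7)

def tokenize(batch):
    # Max token length of 256, per the table above.
    return tokenizer(batch["text"], truncation=True, max_length=256)

args = TrainingArguments(
    output_dir="bert-finetuned-mental-health",  # assumed output path
    num_train_epochs=3,
    learning_rate=2e-5,
    per_device_train_batch_size=16,  # shown per device; could also be the total
)

# `train_ds` / `eval_ds` stand in for the prepared Reddit corpora above,
# tokenized with `tokenize` and label-encoded:
# trainer = Trainer(model=model, args=args, train_dataset=train_ds, eval_dataset=eval_ds)
# trainer.train()
```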

## Evaluation Metrics

| Metric          | Value     |
|-----------------|-----------|
| Accuracy        | 0.9656    |
| Validation Loss | 0.1513    |
| Training Loss   | 0.0483    |
| Samples/sec     | 65.354    |
| Training Time   | ~1.65 hrs |
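
To reproduce the accuracy number on a held-out split, a minimal sketch of the standard `compute_metrics` hook using the `evaluate` library (the hook-up is conventional; the evaluation split itself is an assumption):

```python
import numpy as np
import evaluate

accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    # Standard Trainer hook: logits -> argmax -> accuracy.
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return accuracy.compute(predictions=predictions, references=labels)

# Passed to Trainer via `compute_metrics=compute_metrics`; validation
# loss is reported automatically alongside it.
```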

## Example Inference

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Elite13/bert-finetuned-mental-health")

text = "I'm tired of everything. Nothing makes sense anymore."
result = classifier(text)
print(result)
```
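
By default the pipeline returns only the top label. If you want the score for every class, `top_k=None` should return the full distribution over all seven labels (a standard `text-classification` pipeline option in recent transformers releases):

```python
# Return scores for all seven labels instead of just the top one.
classifier = pipeline(
    "text-classification",
    model="Elite13/bert-finetuned-mental-health",
    top_k=None,
)
print(classifier("I can't stop worrying about everything lately."))
```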