# Intent Classifier (MindPadi)
The `intent_classifier` is a transformer-based text classification model trained to detect user intents in a mental health support setting. It powers the MindPadi assistant's ability to route conversations to the appropriate modules, such as emotional support, scheduling, reflection, or journal analysis, based on the user's message.
## Model Overview
- Model Architecture: DistilBERT (uncased) + classification head
- Task: Intent classification
- Classes: Over 20 intent categories (e.g., `vent`, `gratitude`, `help_request`, `journal_analysis`)
- Model Size: ~66M parameters
- Files:
  - `config.json`
  - `pytorch_model.bin` or `model.safetensors`
  - `tokenizer_config.json`, `vocab.txt`, `tokenizer.json`
  - `checkpoint-*/` (optional training checkpoints)
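A quick way to inspect the classification head shipped with the model is to load its config. Whether the config carries human-readable intent names in `id2label` (rather than generic `LABEL_i` placeholders) is not guaranteed by this card; the usage example below maps IDs to labels with a separate joblib label encoder.

```python
from transformers import AutoConfig

# Inspect the classifier head published with the model
config = AutoConfig.from_pretrained("mindpadi/intent_classifier")
print(config.num_labels)   # number of intent classes
print(config.id2label)     # may contain generic LABEL_i names; see the label encoder below
```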
## Intended Use
### Use Cases
- Detecting user intent in MindPadi mental health conversations
- Enabling context-specific dialogue flows
- Assisting with journal entry triage and tagging
- Triggering therapy-related tools (e.g., emotion check-ins, PubMed summaries)
### Not Intended For
- Multilingual intent classification (English-only)
- Legal or medical diagnosis tasks
- Multi-label classification (currently single-label per input)
## Example Intents Detected
| Intent | Description |
|---|---|
| `vent` | User expressing frustration or emotion freely |
| `help_request` | Seeking mental health support |
| `schedule_session` | Booking a therapy check-in |
| `gratitude` | Showing appreciation for support |
| `journal_analysis` | Submitting a journal entry for AI feedback |
| `reflection` | Talking about personal growth or setbacks |
| `not_sure` | Unsure or unclear message from user |
## Training Details
- Base Model: `distilbert-base-uncased`
- Dataset: Curated and annotated conversations (`training/datasets/finetuned/intents/`)
- Script: `training/train_intent_classifier.py`
- Preprocessing (see the sketch below):
  - Text normalization (lowercasing, punctuation removal)
  - Label encoding
- Loss: CrossEntropyLoss
- Metrics: Accuracy, F1-score
- Tokenizer: WordPiece (DistilBERT tokenizer)
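A minimal sketch of the preprocessing and label-encoding steps listed above. The CSV file name, the `text`/`intent` column names, and the use of scikit-learn's `LabelEncoder` are illustrative assumptions, not the exact contents of `training/train_intent_classifier.py`.

```python
import re
import pandas as pd
from sklearn.preprocessing import LabelEncoder

def normalize(text: str) -> str:
    """Lowercase and strip punctuation, per the preprocessing steps above."""
    text = text.lower()
    return re.sub(r"[^\w\s']", "", text).strip()

# Assumed file name and column names for illustration only
df = pd.read_csv("training/datasets/finetuned/intents/intents.csv")
df["text"] = df["text"].map(normalize)

encoder = LabelEncoder()
df["label"] = encoder.fit_transform(df["intent"])  # string intents -> integer class IDs

# The fitted encoder is what the usage example below loads via joblib, e.g.:
# from joblib import dump
# dump(encoder, "intent_encoder/label_encoder.joblib")
```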
## Evaluation
| Metric | Score |
|---|---|
| Accuracy | 91.3% |
| F1-score | 89.8% |
| Recall@3 | 97.1% |
| Precision | 88.4% |
Evaluation was performed on a held-out validation split of the MindPadi intent dataset.
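A minimal sketch of how these metrics could be recomputed from model outputs on a validation split. The function signature, the weighted averaging choice, and the Recall@3 definition (true label among the top-3 predictions) are illustrative assumptions, since the evaluation script itself is not included in this card.

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, precision_score

def intent_metrics(y_true: np.ndarray, logits: np.ndarray) -> dict:
    """Compute the reported metrics from integer labels and (n, num_classes) logits."""
    y_pred = logits.argmax(axis=-1)
    top3 = np.argsort(logits, axis=-1)[:, -3:]               # indices of the 3 highest logits
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "f1": f1_score(y_true, y_pred, average="weighted"),   # averaging choice is an assumption
        "precision": precision_score(y_true, y_pred, average="weighted"),
        # Recall@3: share of examples whose true label is among the top-3 predictions
        "recall@3": float(np.mean([t in row for t, row in zip(y_true, top3)])),
    }
```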
## Example Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Load the fine-tuned classifier and its tokenizer from the Hub
model = AutoModelForSequenceClassification.from_pretrained("mindpadi/intent_classifier")
tokenizer = AutoTokenizer.from_pretrained("mindpadi/intent_classifier")

text = "I'm struggling with my emotions today"
inputs = tokenizer(text, return_tensors="pt")

# Forward pass; the highest-scoring logit is the predicted intent ID
with torch.no_grad():
    outputs = model(**inputs)
predicted_class = torch.argmax(outputs.logits, dim=1).item()
print("Predicted intent ID:", predicted_class)
```
To map the predicted intent ID back to its label, load the label encoder:

```python
from joblib import load

# Inverse-transform the integer class ID back to its string intent label
label_encoder = load("intent_encoder/label_encoder.joblib")
print("Predicted intent:", label_encoder.inverse_transform([predicted_class])[0])
```
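Continuing the snippet above, a short sketch of how prediction confidence and the top-3 intents could be inspected. The softmax/top-k calls are standard PyTorch; how MindPadi itself handles confidence is not documented here.

```python
import torch.nn.functional as F

# Turn the logits from the example above into probabilities and list the top-3 intents
probs = F.softmax(outputs.logits, dim=-1)[0]
top_probs, top_ids = probs.topk(3)

for p, i in zip(top_probs.tolist(), top_ids.tolist()):
    label = label_encoder.inverse_transform([i])[0]
    print(f"{label}: {p:.2f}")
```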
## Inference Endpoint Example
```python
import requests

# Hugging Face Inference API call (requires a valid access token)
API_URL = "https://api-inference.huggingface.co/models/mindpadi/intent_classifier"
headers = {"Authorization": "Bearer <your-api-token>"}
payload = {"inputs": "Can I book a mental health session?"}

response = requests.post(API_URL, headers=headers, json=payload)
print(response.json())
```
## Limitations
- Not robust to long-form texts (>256 tokens); truncate or summarize input (see the sketch below)
- May confuse overlapping intents such as `vent` and `help_request`
- False positives are possible on vague or sarcastic inputs
- Should be paired with the fallback model (`intent_fallback`) for reliability
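A minimal sketch of the truncation-plus-fallback pattern suggested above, assuming a 256-token cap and a confidence threshold. The 0.6 threshold is an illustrative assumption, and the hand-off to `intent_fallback` is left as a placeholder since its interface is not described in this card.

```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("mindpadi/intent_classifier")
model = AutoModelForSequenceClassification.from_pretrained("mindpadi/intent_classifier")

def classify(text: str, threshold: float = 0.6):
    """Truncate long inputs to 256 tokens and flag low-confidence predictions."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=256)
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = F.softmax(logits, dim=-1)[0]
    confidence, intent_id = probs.max(dim=-1)
    if confidence.item() < threshold:
        return None  # caller should route the message to the fallback model (intent_fallback)
    return intent_id.item()
```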
## Ethical Considerations
- This model is for supportive routing, not clinical diagnosis
- Use with user consent and proper data privacy safeguards
- Intent predictions should not override human judgment in sensitive contexts
## Integration Points
| Location | Functionality |
|---|---|
| `app/chatbot/intent_classifier.py` | Main classifier logic |
| `app/chatbot/intent_router.py` | Routes based on predicted intent |
| `app/utils/embedding_search.py` | Uses the intent encoder for similarity fallback |
| `data/processed_intents.json` | Annotated intent samples |
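A minimal sketch of how routing on the predicted intent label might look. The handler names and the `route` function are hypothetical and do not reflect the actual contents of `app/chatbot/intent_router.py`.

```python
from typing import Callable, Dict

def handle_vent(message: str) -> str:
    return "emotional_support"   # placeholder: hand off to the emotional support module

def handle_help_request(message: str) -> str:
    return "help_flow"           # placeholder: start the help-request flow

def handle_fallback(message: str) -> str:
    return "clarify"             # placeholder: ask the user to clarify

HANDLERS: Dict[str, Callable[[str], str]] = {
    "vent": handle_vent,
    "help_request": handle_help_request,
}

def route(intent: str, message: str) -> str:
    """Dispatch the message to the module registered for this intent, or to a fallback."""
    return HANDLERS.get(intent, handle_fallback)(message)
```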
## License
MIT License: freely available for commercial and non-commercial use.
## Contact
- Team: MindPadi AI Developers
- Profile: https://huggingface.co/mindpadi
- Email: [[email protected]]
Last updated: May 2025