# Intent Fallback Classifier (MindPadi)
This model serves as a lightweight fallback intent classifier for the MindPadi mental health assistant. It is designed to handle ambiguous or unrecognized user inputs when the primary intent classifier fails or yields low confidence. It helps maintain robustness in routing user messages to the appropriate modules in the chatbot backend.
## 🧠 Model Summary

- Model Type: Transformer-based classifier (BERT or DistilBERT variant)
- Task: Text classification (intent prediction)
- Primary Purpose: Backup routing when the main classifier returns low confidence
- Size: Lightweight (optimized for low-latency inference)
- Files: `config.json`, `pytorch_model.bin` or `model.safetensors`, `tokenizer.json`, `vocab.txt`
## 🧾 Intended Use

### ✔️ Use Cases

- Predicting fallback intents such as `"unknown"`, `"help"`, and `"clarify"`
- Activating clarification routines or default flows when the main intent classifier is uncertain
- Used as `fallback_model.predict(...)` in `app/chatbot/intent_classifier.py` (a sketch of such a wrapper follows below)
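
A minimal sketch of what such a wrapper might look like. The `FallbackIntentModel` class name, its `predict` signature, and the returned confidence score are illustrative assumptions, not the actual MindPadi implementation:

```python
# Hypothetical wrapper; MindPadi's actual fallback_model may differ.
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

class FallbackIntentModel:
    def __init__(self, model_name: str = "mindpadi/intent_fallback"):
        self.tokenizer = AutoTokenizer.from_pretrained(model_name)
        self.model = AutoModelForSequenceClassification.from_pretrained(model_name)
        self.model.eval()

    def predict(self, text: str) -> tuple[str, float]:
        """Return (intent_label, confidence) for a single utterance."""
        inputs = self.tokenizer(text, return_tensors="pt", truncation=True, max_length=128)
        with torch.no_grad():
            logits = self.model(**inputs).logits
        probs = torch.softmax(logits, dim=-1)
        confidence, idx = probs.max(dim=-1)
        return self.model.config.id2label[idx.item()], confidence.item()

fallback_model = FallbackIntentModel()
print(fallback_model.predict("I'm not sure what I need"))  # e.g. ("unknown", 0.93)
```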
### 🚫 Not Recommended For
- Serving as the primary intent classifier for all inputs
- Handling highly nuanced or multi-intent queries
- Clinical or domain-specific intent disambiguation
## 🏋️ Training Details

Dataset: internal fallback samples and "unknown intent" examples

- Location: `training/datasets/fallback_intents.json`
- Includes mislabeled, ambiguous, or noisy utterances

Script: `training/train_intent_classifier.py` (with fallback mode enabled)

Classes (example schema below):

- `"unknown"`
- `"clarify"`
- `"greeting"`
- `"exit"`
- `"irrelevant"`
- `"default"`
### Hyperparameters

- Model: `bert-base-uncased` or similar
- Batch Size: 16
- Learning Rate: 3e-5
- Epochs: 3–4
- Max Sequence Length: 128
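
A minimal fine-tuning sketch using these hyperparameters with the Hugging Face `Trainer`. The dataset loading assumes the hypothetical `{"text", "label"}` schema shown above; the actual training logic lives in `training/train_intent_classifier.py`:

```python
# Illustrative fine-tuning sketch; not the actual MindPadi training script.
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)
from datasets import load_dataset

labels = ["unknown", "clarify", "greeting", "exit", "irrelevant", "default"]
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(labels)
)

# Assumes {"text": ..., "label": ...} records (hypothetical schema above).
dataset = load_dataset("json", data_files="training/datasets/fallback_intents.json")["train"]
dataset = dataset.map(lambda ex: {"label": labels.index(ex["label"])})
dataset = dataset.map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=128, padding="max_length")
)

args = TrainingArguments(
    output_dir="intent_fallback",
    per_device_train_batch_size=16,
    learning_rate=3e-5,
    num_train_epochs=4,  # 3-4 per the card
)
Trainer(model=model, args=args, train_dataset=dataset).train()
```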
## 📊 Evaluation

- Accuracy: ~91% on a test set of ambiguous intent queries
- F1-Score (unknown intent): 0.94
- Confusion Matrix: available in `logs/fallback_intent_eval.png`
- Performance Benchmark: inference latency < 50 ms on CPU
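
A hedged sketch of how such metrics could be reproduced with scikit-learn, assuming a held-out set of `(texts, gold_labels)` pairs and the wrapper sketched earlier:

```python
# Illustrative evaluation; the test set and wrapper interface are assumptions.
from sklearn.metrics import accuracy_score, f1_score

def evaluate(model_wrapper, texts, gold_labels):
    preds = [model_wrapper.predict(t)[0] for t in texts]
    acc = accuracy_score(gold_labels, preds)
    f1_unknown = f1_score(gold_labels, preds, labels=["unknown"], average="macro")
    return acc, f1_unknown
```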
## 💬 Example Usage

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_name = "mindpadi/intent_fallback"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# Tokenize a single ambiguous utterance and run a forward pass
inputs = tokenizer("I'm not sure what I need", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Pick the highest-scoring intent class
predicted_class = torch.argmax(outputs.logits, dim=1)
print(predicted_class.item())  # e.g., 0 => "unknown"
```
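
If the checkpoint's `config.json` includes an `id2label` mapping (an assumption here; otherwise labels appear as `LABEL_0`, `LABEL_1`, ...), the raw index can be turned into a readable label, and a softmax confidence can support threshold-based activation:

```python
# Continues from the snippet above.
probs = torch.softmax(outputs.logits, dim=-1)
confidence, idx = probs.max(dim=-1)
print(model.config.id2label[idx.item()], round(confidence.item(), 3))
```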
## 🔧 Integration in MindPadi

This model is used as:

- A safety net in `app/chatbot/intent_classifier.py`
- Part of the router fallback in `app/chatbot/intent_router.py` (threshold sketch below)
- Optional validation in LangGraph workflows for ambiguous queries
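
A minimal sketch of threshold-based routing; the `predict` interfaces and the 0.6 default cutoff are illustrative assumptions, not the actual `intent_router.py` logic:

```python
# Hypothetical routing logic; MindPadi's actual router may differ.
def route_intent(text: str, primary_model, fallback_model,
                 threshold: float = 0.6) -> str:
    """Use the primary classifier; defer to the fallback below the confidence cutoff."""
    intent, confidence = primary_model.predict(text)
    if confidence < threshold:
        intent, _ = fallback_model.predict(text)
    return intent
```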
## ⚠️ Limitations

- May overgeneralize rare intents as `"unknown"`
- Trained on a relatively small fallback dataset
- May require manually tuned confidence thresholds for activation in hybrid systems
- English-only
## 🧪 Deployment (Optional)

For real-time inference via Hugging Face Inference Endpoints:

```python
import requests

# Replace <your-endpoint> and <your-token> with your endpoint URL and access token
api_url = "https://<your-endpoint>.hf.space"
headers = {
    "Authorization": "Bearer <your-token>",
    "Content-Type": "application/json",
}
payload = {"inputs": "I'm not sure what I need"}

response = requests.post(api_url, headers=headers, json=payload)
print(response.json())  # typically a list of {"label": ..., "score": ...} predictions
```
## 📄 License

MIT License: open for personal and commercial use with attribution.
## 📬 Contact

- Project: MindPadi AI
- Team: MindPadi Developers
- Email: [email protected]
- GitHub: https://github.com/mindpadi

Last updated: May 2025