SamanthaStorm committed (verified)
Commit be891ca · Parent: 4743191

Upload Healthy Boundary Predictor v1.0 - 100% accuracy
README.md ADDED
@@ -0,0 +1,119 @@
---
language: en
license: apache-2.0
library_name: transformers
tags:
- boundary-detection
- mental-health
- communication
- text-classification
- psychology
datasets:
- custom
metrics:
- accuracy
- f1
model-index:
- name: healthy-boundary-predictor
  results:
  - task:
      type: text-classification
      name: Boundary Health Classification
    metrics:
    - type: accuracy
      value: 1.0
      name: Accuracy
    - type: f1
      value: 1.0
      name: F1 Score
---

# Healthy Boundary Predictor 🛡️

A fine-tuned DistilBERT model for detecting healthy vs. unhealthy boundaries in text communication.

## Model Description

This model analyzes text to determine whether communication patterns reflect healthy or unhealthy boundaries. It's designed to help identify:

- **Healthy Boundaries**: Clear communication, mutual respect, appropriate assertiveness
- **Unhealthy Boundaries**: Manipulation, coercion, dismissiveness, control

## Performance

- **Accuracy**: 100%
- **F1 Score**: 1.0
- **Training Data**: 170+ carefully curated examples
- **Architecture**: Fine-tuned DistilBERT

The snippet below shows how these metrics can be recomputed on held-out data.

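This is a minimal sketch only; the texts and labels are illustrative placeholders, not the project's actual evaluation set:

```python
import torch
from sklearn.metrics import accuracy_score, f1_score
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("SamanthaStorm/healthy-boundary-predictor")
model = AutoModelForSequenceClassification.from_pretrained("SamanthaStorm/healthy-boundary-predictor")
model.eval()  # disable dropout for deterministic inference

# Placeholder held-out data: 1 = healthy, 0 = unhealthy (matching label2id in config.json)
texts = [
    "I need some time to think about this decision.",
    "You can't see your friends unless I say so.",
]
labels = [1, 0]

inputs = tokenizer(texts, return_tensors="pt", truncation=True, padding=True)
with torch.no_grad():
    preds = model(**inputs).logits.argmax(dim=-1).tolist()

print("accuracy:", accuracy_score(labels, preds))
print("f1:", f1_score(labels, preds))
```
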
## Intended Use

This model is designed for:
- Mental health and communication tools
- Educational applications about healthy relationships
- Content moderation for communication platforms
- Personal development and self-awareness tools

## Usage

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("SamanthaStorm/healthy-boundary-predictor")
model = AutoModelForSequenceClassification.from_pretrained("SamanthaStorm/healthy-boundary-predictor")
model.eval()  # disable dropout for inference

# Example prediction
text = "I need some time to think about this decision."
inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True)

with torch.no_grad():
    outputs = model(**inputs)
    predictions = torch.nn.functional.softmax(outputs.logits, dim=-1)

# Index 1 is the "healthy" class (see id2label in config.json)
healthy_prob = predictions[0][1].item()
prediction = "healthy" if healthy_prob > 0.5 else "unhealthy"

print(f"Prediction: {prediction} (confidence: {healthy_prob:.3f})")
```

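For quick experiments, the same check can go through the transformers pipeline API, which bundles tokenization, softmax, and the id2label mapping into one call; a minimal sketch:

```python
from transformers import pipeline

# The pipeline reads id2label from the model config, so it returns
# "healthy" / "unhealthy" label strings directly
classifier = pipeline("text-classification", model="SamanthaStorm/healthy-boundary-predictor")

print(classifier("I need some time to think about this decision."))
# e.g. [{'label': 'healthy', 'score': 0.99}]  (score shown is illustrative)
```
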
## Training Data

The model was trained on a diverse dataset including:
- Professional workplace scenarios
- Personal relationship communications
- Family dynamics
- Financial boundary situations
- Emotional boundary examples
- Nuanced examples with subtle manipulation patterns

## Limitations

- This model is for educational and supportive purposes only
- It is not a substitute for professional mental health advice
- The reported metrics come from a small (170+ example) custom dataset, and performance may vary on domains not seen during training
- Cultural and contextual nuances may affect accuracy

## Ethical Considerations

- Designed to promote healthy communication patterns
- Should be used to support, not replace, human judgment
- Privacy and consent are important when analyzing personal communications

## Citation

If you use this model, please cite:

```bibtex
@misc{healthy-boundary-predictor,
  title={Healthy Boundary Predictor},
  author={SamanthaStorm},
  year={2025},
  url={https://huggingface.co/SamanthaStorm/healthy-boundary-predictor}
}
```

## License

Apache 2.0

config.json ADDED
@@ -0,0 +1,32 @@
{
  "activation": "gelu",
  "architectures": [
    "DistilBertForSequenceClassification"
  ],
  "attention_dropout": 0.1,
  "dim": 768,
  "dropout": 0.1,
  "hidden_dim": 3072,
  "initializer_range": 0.02,
  "max_position_embeddings": 512,
  "model_type": "distilbert",
  "n_heads": 12,
  "n_layers": 6,
  "pad_token_id": 0,
  "problem_type": "single_label_classification",
  "qa_dropout": 0.1,
  "seq_classif_dropout": 0.2,
  "sinusoidal_pos_embds": false,
  "tie_weights_": true,
  "torch_dtype": "float32",
  "transformers_version": "4.53.0",
  "vocab_size": 30522,
  "id2label": {
    "0": "unhealthy",
    "1": "healthy"
  },
  "label2id": {
    "unhealthy": 0,
    "healthy": 1
  }
}
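Since id2label maps class index 1 to "healthy", predictions can be decoded through the loaded config instead of hard-coding the index; a minimal sketch (the sample sentence is illustrative):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("SamanthaStorm/healthy-boundary-predictor")
model = AutoModelForSequenceClassification.from_pretrained("SamanthaStorm/healthy-boundary-predictor")

inputs = tokenizer("That sounds fair to me.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# transformers converts the id2label keys to ints at load time
print(model.config.id2label[logits.argmax(dim=-1).item()])
```
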
model.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6b680cb18720b823b142a26455c6e37b056ed363451fb33fb7b1070015b40c0d
size 267832560
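The lines above are a Git LFS pointer, not the weights themselves: the oid is the SHA-256 of the real 267,832,560-byte safetensors file that `git lfs pull` downloads. A minimal sketch for verifying a downloaded copy (the local path is assumed):

```python
import hashlib

EXPECTED = "6b680cb18720b823b142a26455c6e37b056ed363451fb33fb7b1070015b40c0d"

# Stream in 1 MiB chunks so the ~268 MB file is never fully in memory
digest = hashlib.sha256()
with open("model.safetensors", "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        digest.update(chunk)

print(digest.hexdigest() == EXPECTED)  # True if LFS fetched the real weights
```
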
special_tokens_map.json ADDED
@@ -0,0 +1,7 @@
{
  "cls_token": "[CLS]",
  "mask_token": "[MASK]",
  "pad_token": "[PAD]",
  "sep_token": "[SEP]",
  "unk_token": "[UNK]"
}
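These entries match the loaded tokenizer's special_tokens_map attribute, so they can be checked without reading the JSON by hand:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("SamanthaStorm/healthy-boundary-predictor")
print(tokenizer.special_tokens_map)
# {'unk_token': '[UNK]', 'sep_token': '[SEP]', 'pad_token': '[PAD]',
#  'cls_token': '[CLS]', 'mask_token': '[MASK]'}
```
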
tokenizer.json ADDED
The diff for this file is too large to render.
 
tokenizer_config.json ADDED
@@ -0,0 +1,56 @@
{
  "added_tokens_decoder": {
    "0": {
      "content": "[PAD]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "100": {
      "content": "[UNK]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "101": {
      "content": "[CLS]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "102": {
      "content": "[SEP]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "103": {
      "content": "[MASK]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "clean_up_tokenization_spaces": false,
  "cls_token": "[CLS]",
  "do_lower_case": true,
  "extra_special_tokens": {},
  "mask_token": "[MASK]",
  "model_max_length": 512,
  "pad_token": "[PAD]",
  "sep_token": "[SEP]",
  "strip_accents": null,
  "tokenize_chinese_chars": true,
  "tokenizer_class": "DistilBertTokenizer",
  "unk_token": "[UNK]"
}
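Per this config, the tokenizer lowercases input (do_lower_case: true), caps sequences at 512 tokens (model_max_length), and wraps them in [CLS]/[SEP]; a quick sketch to confirm, with an illustrative sentence:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("SamanthaStorm/healthy-boundary-predictor")

enc = tokenizer("I need some time to think.", truncation=True)
print(tokenizer.convert_ids_to_tokens(enc["input_ids"]))
# ['[CLS]', 'i', 'need', 'some', 'time', 'to', 'think', '.', '[SEP]']
print(tokenizer.model_max_length)  # 512
```
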
vocab.txt ADDED
The diff for this file is too large to render.