reecursion committed
Commit bf558f4 · verified · 1 Parent(s): f9bf6d3

Add SetFit model

1_Pooling/config.json ADDED
@@ -0,0 +1,10 @@
+ {
+   "word_embedding_dimension": 384,
+   "pooling_mode_cls_token": false,
+   "pooling_mode_mean_tokens": true,
+   "pooling_mode_max_tokens": false,
+   "pooling_mode_mean_sqrt_len_tokens": false,
+   "pooling_mode_weightedmean_tokens": false,
+   "pooling_mode_lasttoken": false,
+   "include_prompt": true
+ }
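
This pooling configuration selects mean pooling: sentence embeddings are the attention-masked average of the 384-dimensional token embeddings. A minimal sketch of that operation (an illustration, not the library's exact implementation):

```python
import torch

def mean_pooling(token_embeddings: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    """Average token embeddings over non-padding positions.

    token_embeddings: (batch, seq_len, 384)  # matches word_embedding_dimension
    attention_mask:   (batch, seq_len) with 1 for real tokens, 0 for padding
    """
    mask = attention_mask.unsqueeze(-1).float()    # (batch, seq_len, 1)
    summed = (token_embeddings * mask).sum(dim=1)  # (batch, 384)
    counts = mask.sum(dim=1).clamp(min=1e-9)       # avoid division by zero
    return summed / counts
```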
README.md ADDED
@@ -0,0 +1,213 @@
+ ---
+ library_name: setfit
+ tags:
+ - setfit
+ - sentence-transformers
+ - text-classification
+ - generated_from_setfit_trainer
+ metrics:
+ - accuracy
+ widget:
+ - text: He is Male, his heart rate is 148, he walks 10000 steps daily, and is Normal.
+     He slept at 1 hrs. Yesterday, he slept from 2.0hrs to 3.0 hrs, with a duration
+     of 90.0 minutes and 0 interruptions. The day before yesterday, he slept from 22.0
+     hrs to 6.0 hrs, with a duration of 485.0 minutes and 0 interruptions.
+ - text: She is Female, her heart rate is 68, she walks 11000 steps daily and is Normal.
+     She slept at 1 hrs. Yesterday, she slept from 1.0 hrs to 9.0 hrs, with a duration
+     of 495.0 minutes and 0 interruptions. The day before yesterday, she slept from
+     1.0 hrs to 10.0 hrs, with a duration of 540.0 minutes and 0 interruptions.
+ - text: He is Male, his heart rate is 70, he walks 8500 steps daily, and is Normal.
+     He slept at 23 hrs. Yesterday, he slept from 23.0hrs to 8.0 hrs, with a duration
+     of 350.0 minutes and 3 interruptions. The day before yesterday, he slept from
+     22.0 hrs to 6.0 hrs, with a duration of 390.0 minutes and 1 interruptions.
+ - text: He is Male, his heart rate is 93, he walks 9800 steps daily, and is Normal.
+     He slept at 0 hrs. Yesterday, he slept from 23.0hrs to 7.0 hrs, with a duration
+     of 460.0 minutes and 0 interruptions. The day before yesterday, he slept from
+     23.0 hrs to 7.0 hrs, with a duration of 425.0 minutes and 1 interruptions.
+ - text: He is Male, his heart rate is 75, he walks 11000 steps daily, and is Normal.
+     He slept at 2 hrs. Yesterday, he slept from 3.0hrs to 7.0 hrs, with a duration
+     of 400.0 minutes and 2 interruptions. The day before yesterday, he slept from
+     1.0 hrs to 8.0 hrs, with a duration of 450.0 minutes and 3 interruptions.
+ pipeline_tag: text-classification
+ inference: true
+ base_model: sentence-transformers/paraphrase-MiniLM-L3-v2
+ model-index:
+ - name: SetFit with sentence-transformers/paraphrase-MiniLM-L3-v2
+   results:
+   - task:
+       type: text-classification
+       name: Text Classification
+     dataset:
+       name: Unknown
+       type: unknown
+       split: test
+     metrics:
+     - type: accuracy
+       value: 0.8666666666666667
+       name: Accuracy
+ ---
+
+ # SetFit with sentence-transformers/paraphrase-MiniLM-L3-v2
+
+ This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-MiniLM-L3-v2](https://huggingface.co/sentence-transformers/paraphrase-MiniLM-L3-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
+
+ The model has been trained using an efficient few-shot learning technique that involves:
+
+ 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
+ 2. Training a classification head with features from the fine-tuned Sentence Transformer (sketched below).
+
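+ As a rough illustration of these two phases, here is a minimal, hedged training sketch using the `setfit` Trainer API. The toy dataset below is purely hypothetical; the actual training data is not part of this repository.
+
+ ```python
+ from datasets import Dataset
+ from setfit import SetFitModel, Trainer, TrainingArguments
+
+ # Hypothetical few-shot examples -- placeholders, not the real training set.
+ train_dataset = Dataset.from_dict({
+     "text": ["example sentence for class 0", "example sentence for class 1"],
+     "label": [0, 1],
+ })
+
+ model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-MiniLM-L3-v2")
+ args = TrainingArguments(batch_size=16, num_epochs=1, num_iterations=15)
+
+ trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
+ trainer.train()  # phase 1: contrastive fine-tuning of the body; phase 2: fitting the head
+ ```
+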
+ ## Model Details
+
+ ### Model Description
+ - **Model Type:** SetFit
+ - **Sentence Transformer body:** [sentence-transformers/paraphrase-MiniLM-L3-v2](https://huggingface.co/sentence-transformers/paraphrase-MiniLM-L3-v2)
+ - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
+ - **Maximum Sequence Length:** 128 tokens
+ - **Number of Classes:** 3 classes
+ <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
+ <!-- - **Language:** Unknown -->
+ <!-- - **License:** Unknown -->
+
+ ### Model Sources
+
+ - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
+ - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
+ - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
+
+ ### Model Labels
+ | Label | Examples |
+ |:------|:---------|
+ | 1 | <ul><li>'He is Male, his heart rate is 64, he walks 10000 steps daily, and is Normal. He slept at 11 hrs. Yesterday, he slept from 22.0hrs to 11.0 hrs, with a duration of 765.0 minutes and 2 interruptions. The day before yesterday, he slept from 23.0 hrs to 8.0 hrs, with a duration of 527.0 minutes and 4 interruptions.'</li><li>'She is Female, her heart rate is 89, she walks 3873 steps daily and is Overweight. She slept at 10 hrs. Yesterday, she slept from 4.0 hrs to 6.0 hrs, with a duration of 120.0 minutes and 1 interruptions. The day before yesterday, she slept from 4.0 hrs to 9.0 hrs, with a duration of 300.0 minutes and 2 interruptions.'</li><li>'She is Female, her heart rate is 68, she walks 11000 steps daily and is Normal. She slept at 10 hrs. Yesterday, she slept from 1.0 hrs to 9.0 hrs, with a duration of 495.0 minutes and 0 interruptions. The day before yesterday, she slept from 1.0 hrs to 10.0 hrs, with a duration of 540.0 minutes and 1 interruptions.'</li></ul> |
+ | 2 | <ul><li>'She is Female, her heart rate is 66, she walks 2413 steps daily and is Underweight. She slept at 8 hrs. Yesterday, she slept from 23.0 hrs to 7.0 hrs, with a duration of 472.0 minutes and 5 interruptions. The day before yesterday, she slept from 23.0 hrs to 5.0 hrs, with a duration of 344.0 minutes and 6 interruptions.'</li><li>'He is Male, his heart rate is 95, he walks 9000 steps daily, and is Normal. He slept at 10 hrs. Yesterday, he slept from 4.0hrs to 9.0 hrs, with a duration of 323.0 minutes and 5 interruptions. The day before yesterday, he slept from 2.0 hrs to 10.0 hrs, with a duration of 501.0 minutes and 6 interruptions.'</li></ul> |
+ | 0 | <ul><li>'She is Female, her heart rate is 100, she walks 8000 steps daily and is Normal. She slept at 7 hrs. Yesterday, she slept from 2.0 hrs to 7.0 hrs, with a duration of 323.0 minutes and 0 interruptions. The day before yesterday, she slept from 0.0 hrs to 6.0 hrs, with a duration of 395.0 minutes and 2 interruptions.'</li><li>'He is Male, his heart rate is 93, he walks 9800 steps daily, and is Normal. He slept at 9 hrs. Yesterday, he slept from 23.0hrs to 7.0 hrs, with a duration of 460.0 minutes and 0 interruptions. The day before yesterday, he slept from 23.0 hrs to 7.0 hrs, with a duration of 425.0 minutes and 1 interruptions.'</li></ul> |
+
+ ## Evaluation
+
+ ### Metrics
+ | Label | Accuracy |
+ |:--------|:---------|
+ | **all** | 0.8667 |
+
+ ## Uses
+
+ ### Direct Use for Inference
+
+ First install the SetFit library:
+
+ ```bash
+ pip install setfit
+ ```
+
+ Then you can load this model and run inference.
+
+ ```python
+ from setfit import SetFitModel
+
+ # Download from the 🤗 Hub
+ model = SetFitModel.from_pretrained("reecursion/few-shot-stress-detection-miniLM")
+ # Run inference
+ preds = model("He is Male, his heart rate is 75, he walks 11000 steps daily, and is Normal. He slept at 2 hrs. Yesterday, he slept from 3.0hrs to 7.0 hrs, with a duration of 400.0 minutes and 2 interruptions. The day before yesterday, he slept from 1.0 hrs to 8.0 hrs, with a duration of 450.0 minutes and 3 interruptions.")
+ ```
+
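+ For a single input string, the call returns one predicted class label; passing a list of strings returns one prediction per string.
+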
+ <!--
+ ### Downstream Use
+
+ *List how someone could finetune this model on their own dataset.*
+ -->
+
+ <!--
+ ### Out-of-Scope Use
+
+ *List how the model may foreseeably be misused and address what users ought not to do with the model.*
+ -->
+
+ <!--
+ ## Bias, Risks and Limitations
+
+ *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
+ -->
+
+ <!--
+ ### Recommendations
+
+ *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
+ -->
+
+ ## Training Details
+
+ ### Training Set Metrics
+ | Training set | Min | Median | Max |
+ |:-------------|:----|:-------|:----|
+ | Word count | 59 | 59.5 | 60 |
+
+ | Label | Training Sample Count |
+ |:------|:----------------------|
+ | 0 | 2 |
+ | 1 | 6 |
+ | 2 | 2 |
+
+ ### Training Hyperparameters
+ - batch_size: (16, 16)
+ - num_epochs: (1, 1)
+ - max_steps: -1
+ - sampling_strategy: oversampling
+ - num_iterations: 15
+ - body_learning_rate: (2e-05, 2e-05)
+ - head_learning_rate: 2e-05
+ - loss: CosineSimilarityLoss
+ - distance_metric: cosine_distance
+ - margin: 0.25
+ - end_to_end: False
+ - use_amp: False
+ - warmup_proportion: 0.1
+ - seed: 42
+ - eval_max_steps: -1
+ - load_best_model_at_end: False
+
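+ A hedged sketch of how this configuration would be expressed with `setfit.TrainingArguments` (pair values set the embedding and classifier phases separately); this is an assumption about how the run was set up, not a record of it:
+
+ ```python
+ from setfit import TrainingArguments
+
+ # Mirrors the hyperparameter list above.
+ args = TrainingArguments(
+     batch_size=(16, 16),
+     num_epochs=(1, 1),
+     num_iterations=15,
+     sampling_strategy="oversampling",
+     body_learning_rate=(2e-05, 2e-05),
+     head_learning_rate=2e-05,
+     end_to_end=False,
+     use_amp=False,
+     warmup_proportion=0.1,
+     seed=42,
+ )
+ ```
+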
+ ### Training Results
+ | Epoch | Step | Training Loss | Validation Loss |
+ |:------:|:----:|:-------------:|:---------------:|
+ | 0.0526 | 1 | 0.3562 | - |
+
+ ### Framework Versions
+ - Python: 3.10.12
+ - SetFit: 1.0.3
+ - Sentence Transformers: 2.6.1
+ - Transformers: 4.38.2
+ - PyTorch: 2.2.1+cu121
+ - Datasets: 2.18.0
+ - Tokenizers: 0.15.2
+
+ ## Citation
+
+ ### BibTeX
+ ```bibtex
+ @article{https://doi.org/10.48550/arxiv.2209.11055,
+     doi = {10.48550/ARXIV.2209.11055},
+     url = {https://arxiv.org/abs/2209.11055},
+     author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
+     keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
+     title = {Efficient Few-Shot Learning Without Prompts},
+     publisher = {arXiv},
+     year = {2022},
+     copyright = {Creative Commons Attribution 4.0 International}
+ }
+ ```
+
+ <!--
+ ## Glossary
+
+ *Clearly define terms in order to be accessible across audiences.*
+ -->
+
+ <!--
+ ## Model Card Authors
+
+ *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
+ -->
+
+ <!--
+ ## Model Card Contact
+
+ *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
+ -->
config.json ADDED
@@ -0,0 +1,26 @@
+ {
+   "_name_or_path": "sentence-transformers/paraphrase-MiniLM-L3-v2",
+   "architectures": [
+     "BertModel"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "classifier_dropout": null,
+   "gradient_checkpointing": false,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 384,
+   "initializer_range": 0.02,
+   "intermediate_size": 1536,
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 512,
+   "model_type": "bert",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 3,
+   "pad_token_id": 0,
+   "position_embedding_type": "absolute",
+   "torch_dtype": "float32",
+   "transformers_version": "4.38.2",
+   "type_vocab_size": 2,
+   "use_cache": true,
+   "vocab_size": 30522
+ }
config_sentence_transformers.json ADDED
@@ -0,0 +1,9 @@
+ {
+   "__version__": {
+     "sentence_transformers": "2.0.0",
+     "transformers": "4.7.0",
+     "pytorch": "1.9.0+cu102"
+   },
+   "prompts": {},
+   "default_prompt_name": null
+ }
config_setfit.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "labels": null,
+   "normalize_embeddings": false
+ }
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2cb8a028e05f5375d86498c46747c328b1b31ebb6920224ee15517b228f9a901
+ size 69565312
model_head.pkl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:db128af4e95ca71c36e2b1754d525f9ded9b05d924bd25fa0fce91831b7fc890
+ size 10111
modules.json ADDED
@@ -0,0 +1,14 @@
+ [
+   {
+     "idx": 0,
+     "name": "0",
+     "path": "",
+     "type": "sentence_transformers.models.Transformer"
+   },
+   {
+     "idx": 1,
+     "name": "1",
+     "path": "1_Pooling",
+     "type": "sentence_transformers.models.Pooling"
+   }
+ ]
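
modules.json wires the Sentence Transformer body together as a two-stage pipeline: the Transformer module produces token embeddings, and the Pooling module (configured in 1_Pooling/config.json above) reduces them to a single sentence vector. A hedged sketch of inspecting that pipeline, assuming the repository also loads as a plain Sentence Transformer:

```python
from sentence_transformers import SentenceTransformer

# Load the embedding body directly; printing the model lists its Transformer and Pooling modules.
model = SentenceTransformer("reecursion/few-shot-stress-detection-miniLM")
print(model)

# Each input is encoded to a single 384-dimensional vector.
embedding = model.encode("He is Male, his heart rate is 75, and is Normal.")
print(embedding.shape)  # expected: (384,)
```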
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "max_seq_length": 128,
+   "do_lower_case": false
+ }
special_tokens_map.json ADDED
@@ -0,0 +1,37 @@
+ {
+   "cls_token": {
+     "content": "[CLS]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "mask_token": {
+     "content": "[MASK]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": {
+     "content": "[PAD]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "sep_token": {
+     "content": "[SEP]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "unk_token": {
+     "content": "[UNK]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,64 @@
+ {
+   "added_tokens_decoder": {
+     "0": {
+       "content": "[PAD]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "100": {
+       "content": "[UNK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "101": {
+       "content": "[CLS]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "102": {
+       "content": "[SEP]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "103": {
+       "content": "[MASK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "clean_up_tokenization_spaces": true,
+   "cls_token": "[CLS]",
+   "do_basic_tokenize": true,
+   "do_lower_case": true,
+   "mask_token": "[MASK]",
+   "max_length": 128,
+   "model_max_length": 512,
+   "never_split": null,
+   "pad_to_multiple_of": null,
+   "pad_token": "[PAD]",
+   "pad_token_type_id": 0,
+   "padding_side": "right",
+   "sep_token": "[SEP]",
+   "stride": 0,
+   "strip_accents": null,
+   "tokenize_chinese_chars": true,
+   "tokenizer_class": "BertTokenizer",
+   "truncation_side": "right",
+   "truncation_strategy": "longest_first",
+   "unk_token": "[UNK]"
+ }
vocab.txt ADDED
The diff for this file is too large to render. See raw diff