justArmenian committed
Commit 05a3140 · verified · 1 Parent(s): 9f15b31

Add new SentenceTransformer model.
1_Pooling/config.json ADDED
@@ -0,0 +1,10 @@
+ {
+   "word_embedding_dimension": 384,
+   "pooling_mode_cls_token": false,
+   "pooling_mode_mean_tokens": true,
+   "pooling_mode_max_tokens": false,
+   "pooling_mode_mean_sqrt_len_tokens": false,
+   "pooling_mode_weightedmean_tokens": false,
+   "pooling_mode_lasttoken": false,
+   "include_prompt": true
+ }
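Taken together, these flags select plain mean pooling: the sentence embedding is the mask-weighted average of the 384-dimensional token embeddings. A minimal PyTorch sketch of that computation (function and tensor names are illustrative, not part of this repository):

```python
import torch

def mean_pool(token_embeddings: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    """Mask-aware mean pooling, as implied by pooling_mode_mean_tokens = true."""
    mask = attention_mask.unsqueeze(-1).float()      # (batch, seq_len, 1)
    summed = (token_embeddings * mask).sum(dim=1)    # sum embeddings of real tokens only
    counts = mask.sum(dim=1).clamp(min=1e-9)         # number of real tokens per sentence
    return summed / counts                           # (batch, 384) sentence embeddings
```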
README.md ADDED
@@ -0,0 +1,537 @@
+ ---
+ base_model: sentence-transformers/paraphrase-MiniLM-L3-v2
+ datasets: []
+ language: []
+ library_name: sentence-transformers
+ metrics:
+ - cosine_accuracy
+ - dot_accuracy
+ - manhattan_accuracy
+ - euclidean_accuracy
+ - max_accuracy
+ pipeline_tag: sentence-similarity
+ tags:
+ - sentence-transformers
+ - sentence-similarity
+ - feature-extraction
+ - generated_from_trainer
+ - dataset_size:2000
+ - loss:MultipleNegativesRankingLoss
+ widget:
+ - source_sentence: However, this Court will determine that there was sufficient evidence
+     to sustain the jury's verdict if the evidence was "of such quality and weight
+     that, having in mind the beyond a reasonable doubt burden of proof standard, reasonable
+     fair-minded men in the exercise of impartial judgment might reach different conclusions
+     on every element of the offense."
+   sentences:
+   - This Court will determine if there was enough evidence to support the jury's verdict
+     by considering whether reasonable people could have reached different conclusions
+     based on the evidence presented.
+   - The VA psychiatrist believed that the Veteran was likely to have PTSD as a direct
+     result of the attack on him during his military service in Korea.
+   - The Veteran started seeing a mental health specialist at the VA on a regular basis.
+ - source_sentence: Under such circumstances, VA is required to prove by clear and
+     unmistakable evidence that a disease or injury manifesting in service both preexisted
+     service and was not aggravated by service.
+   sentences:
+   - The independent mental health expert offered a comprehensive account of the Veteran's
+     mental health issues, service-related impairments, and previous psychiatric and
+     medical treatment experiences.
+   - At the trial, the prosecution failed to provide a search warrant, which was not
+     explained or justified.
+   - In order to establish that a disease or injury did not arise from service, VA
+     must provide clear and convincing evidence that the condition existed prior to
+     military service and was not exacerbated by service.
+ - source_sentence: Evidence of behavior changes following the claimed assault is one
+     type of relevant evidence that may be found in these sources.
+   sentences:
+   - The independent medical clinician comprehensively documented the impact of the
+     Veteran's alleged condition on their functional abilities.
+   - A range of behavioral indicators, including alterations in demeanor, speech patterns,
+     and physical reactions, can serve as valuable evidence in support of allegations
+     of assault.
+   - He claims that his mental health issues, which have been diagnosed as various
+     psychiatric disorders, are a result of the trauma he experienced during his deployment
+     to a combat zone in Vietnam while stationed in Japan in 1974.
+ - source_sentence: The court held Apple had not made the requisite showing of likelihood
+     of success on the merits because it “concluded that there is some doubt as to
+     the copyrightability of the programs described in this litigation.”
+   sentences:
+   - The trial court committed a series of errors in this case, including failing to
+     instruct the jury on an essential element of felonious damage to computers, denying
+     the defendant's motion to dismiss, and entering judgment on a fatally flawed indictment.
+   - The court determined that Apple had not provided sufficient evidence to demonstrate
+     a likelihood of success on the merits, as it had "raised some doubts about the
+     copyrightability of the programs in question."
+   - The Veteran believes that she should be granted service connection for chronic
+     PTSD or other psychiatric disorder because she has been diagnosed with chronic
+     PTSD as a result of several stressful events that occurred during her periods
+     of active duty and active duty for training with the Army National Guard.
+ - source_sentence: In contrast, the scope of punishable conduct under the instant
+     statute is limited by the individual's specified intent to "haras[s]" by communicating
+     a "threat" so as to "engage in a knowing and willful course of conduct" directed
+     at the victim that "alarms, torments, or terrorizes" the victim.
+   sentences:
+   - The scope of punishable conduct under the statute is limited to the individual's
+     intent to harass by communicating a threat so as to engage in a knowing and willful
+     course of conduct directed at the victim that alarms, torments, or terrorizes
+     the victim.
+   - The Veteran has been diagnosed with both major depressive disorder and PTSD.
+   - The trial court's decision on an anti-SLAPP motion is subject to de novo review.
+ model-index:
+ - name: SentenceTransformer based on sentence-transformers/paraphrase-MiniLM-L3-v2
+   results:
+   - task:
+       type: triplet
+       name: Triplet
+     dataset:
+       name: all nli dev
+       type: all-nli-dev
+     metrics:
+     - type: cosine_accuracy
+       value: 1.0
+       name: Cosine Accuracy
+     - type: dot_accuracy
+       value: 0.0
+       name: Dot Accuracy
+     - type: manhattan_accuracy
+       value: 1.0
+       name: Manhattan Accuracy
+     - type: euclidean_accuracy
+       value: 1.0
+       name: Euclidean Accuracy
+     - type: max_accuracy
+       value: 1.0
+       name: Max Accuracy
+   - task:
+       type: triplet
+       name: Triplet
+     dataset:
+       name: all nli test
+       type: all-nli-test
+     metrics:
+     - type: cosine_accuracy
+       value: 1.0
+       name: Cosine Accuracy
+     - type: dot_accuracy
+       value: 0.0
+       name: Dot Accuracy
+     - type: manhattan_accuracy
+       value: 1.0
+       name: Manhattan Accuracy
+     - type: euclidean_accuracy
+       value: 1.0
+       name: Euclidean Accuracy
+     - type: max_accuracy
+       value: 1.0
+       name: Max Accuracy
+ ---
+
+ # SentenceTransformer based on sentence-transformers/paraphrase-MiniLM-L3-v2
+
+ This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/paraphrase-MiniLM-L3-v2](https://huggingface.co/sentence-transformers/paraphrase-MiniLM-L3-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
+
+ ## Model Details
+
+ ### Model Description
+ - **Model Type:** Sentence Transformer
+ - **Base model:** [sentence-transformers/paraphrase-MiniLM-L3-v2](https://huggingface.co/sentence-transformers/paraphrase-MiniLM-L3-v2) <!-- at revision 54825a6a5a83f5d98d318ba2a11bfd31eb906f06 -->
+ - **Maximum Sequence Length:** 128 tokens
+ - **Output Dimensionality:** 384 dimensions
+ - **Similarity Function:** Cosine Similarity
+ <!-- - **Training Dataset:** Unknown -->
+ <!-- - **Language:** Unknown -->
+ <!-- - **License:** Unknown -->
+
+ ### Model Sources
+
+ - **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
+ - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
+ - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
+
+ ### Full Model Architecture
+
+ ```
+ SentenceTransformer(
+   (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
+   (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
+ )
+ ```
+
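The same two-module stack can be rebuilt explicitly with the `sentence_transformers.models` API. A minimal sketch (variable names are illustrative, and it starts from the base model's weights rather than this repository's fine-tuned ones):

```python
from sentence_transformers import SentenceTransformer, models

# Compose the Transformer encoder and the mean-pooling head by hand.
word_embedding_model = models.Transformer(
    "sentence-transformers/paraphrase-MiniLM-L3-v2", max_seq_length=128
)
pooling_model = models.Pooling(
    word_embedding_model.get_word_embedding_dimension(),  # 384
    pooling_mode="mean",
)
model = SentenceTransformer(modules=[word_embedding_model, pooling_model])
```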
+ ## Usage
+
+ ### Direct Usage (Sentence Transformers)
+
+ First install the Sentence Transformers library:
+
+ ```bash
+ pip install -U sentence-transformers
+ ```
+
+ Then you can load this model and run inference.
+ ```python
+ from sentence_transformers import SentenceTransformer
+
+ # Download from the 🤗 Hub
+ model = SentenceTransformer("justArmenian/legal_paraphrase")
+ # Run inference
+ sentences = [
+     'In contrast, the scope of punishable conduct under the instant statute is limited by the individual\'s specified intent to "haras[s]" by communicating a "threat" so as to "engage in a knowing and willful course of conduct" directed at the victim that "alarms, torments, or terrorizes" the victim.',
+     "The scope of punishable conduct under the statute is limited to the individual's intent to harass by communicating a threat so as to engage in a knowing and willful course of conduct directed at the victim that alarms, torments, or terrorizes the victim.",
+     'The Veteran has been diagnosed with both major depressive disorder and PTSD.',
+ ]
+ embeddings = model.encode(sentences)
+ print(embeddings.shape)
+ # [3, 384]
+
+ # Get the similarity scores for the embeddings
+ similarities = model.similarity(embeddings, embeddings)
+ print(similarities.shape)
+ # [3, 3]
+ ```
+
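Beyond pairwise scoring, the same two calls support a simple semantic-search loop over a small corpus. A hedged sketch (the corpus and query strings are made up for illustration):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("justArmenian/legal_paraphrase")

# Illustrative corpus and query; any collection of legal sentences works the same way.
corpus = [
    "The Veteran has been diagnosed with both major depressive disorder and PTSD.",
    "The trial court's decision on an anti-SLAPP motion is subject to de novo review.",
    "At the trial, the prosecution failed to provide a search warrant.",
]
query = "Which standard of review applies to an anti-SLAPP ruling?"

corpus_embeddings = model.encode(corpus)
query_embedding = model.encode([query])
scores = model.similarity(query_embedding, corpus_embeddings)  # shape (1, 3), cosine by default
best = scores.argmax().item()
print(corpus[best])
```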
+ <!--
+ ### Direct Usage (Transformers)
+
+ <details><summary>Click to see the direct usage in Transformers</summary>
+
+ </details>
+ -->
+
+ <!--
+ ### Downstream Usage (Sentence Transformers)
+
+ You can finetune this model on your own dataset.
+
+ <details><summary>Click to expand</summary>
+
+ </details>
+ -->
+
+ <!--
+ ### Out-of-Scope Use
+
+ *List how the model may foreseeably be misused and address what users ought not to do with the model.*
+ -->
+
+ ## Evaluation
+
+ ### Metrics
+
+ #### Triplet
+ * Dataset: `all-nli-dev`
+ * Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator)
+
+ | Metric             | Value   |
+ |:-------------------|:--------|
+ | cosine_accuracy    | 1.0     |
+ | dot_accuracy       | 0.0     |
+ | manhattan_accuracy | 1.0     |
+ | euclidean_accuracy | 1.0     |
+ | **max_accuracy**   | **1.0** |
+
+ #### Triplet
+ * Dataset: `all-nli-test`
+ * Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator)
+
+ | Metric             | Value   |
+ |:-------------------|:--------|
+ | cosine_accuracy    | 1.0     |
+ | dot_accuracy       | 0.0     |
+ | manhattan_accuracy | 1.0     |
+ | euclidean_accuracy | 1.0     |
+ | **max_accuracy**   | **1.0** |
+
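A minimal sketch of reproducing this kind of check with `TripletEvaluator` (the triplets below are illustrative stand-ins; the actual 500-sample dev/test splits are not part of this commit):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import TripletEvaluator

model = SentenceTransformer("justArmenian/legal_paraphrase")

# Illustrative triplets only; accuracy counts how often the anchor is closer to the positive than the negative.
anchors = ["Evidence of behavior changes following the claimed assault is one type of relevant evidence."]
positives = ["Changes in behavior can serve as valuable evidence in support of allegations of assault."]
negatives = ["The trial court's decision on an anti-SLAPP motion is subject to de novo review."]

evaluator = TripletEvaluator(anchors=anchors, positives=positives, negatives=negatives, name="all-nli-dev")
print(evaluator(model))  # dict of accuracies analogous to the tables above
```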
+ <!--
+ ## Bias, Risks and Limitations
+
+ *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
+ -->
+
+ <!--
+ ### Recommendations
+
+ *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
+ -->
+
+ ## Training Details
+
+ ### Training Dataset
+
+ #### Unnamed Dataset
+
+ * Size: 2,000 training samples
+ * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
+ * Approximate statistics based on the first 1000 samples:
+   |         | anchor | positive | negative |
+   |:--------|:-------|:---------|:---------|
+   | type    | string | string   | string   |
+   | details | <ul><li>min: 8 tokens</li><li>mean: 36.01 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 31.41 tokens</li><li>max: 99 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 31.39 tokens</li><li>max: 99 tokens</li></ul> |
+ * Samples:
+   | anchor | positive | negative |
+   |:-------|:---------|:---------|
+   | <code>The weight of the competent and probative medical opinions of record is against finding that the Veteran has a current diagnosis of PTSD or any other chronic acquired psychiatric disorder which is related to her military service.</code> | <code>The weight of the credible and persuasive medical evidence on record suggests that the Veteran does not currently suffer from PTSD or any other chronic psychiatric condition related to her military service.</code> | <code>It is evident that an unauthorized physical intrusion would have been deemed a "search" under the Fourth Amendment when it was originally formulated.</code> |
+   | <code>We have no doubt that such a physical intrusion would have been considered a “search” within the meaning of the Fourth Amendment when it was adopted.</code> | <code>It is evident that an unauthorized physical intrusion would have been deemed a "search" under the Fourth Amendment when it was originally formulated.</code> | <code>In June 1972, the Veteran's condition was assessed by the Army Medical Board, which concluded that the Veteran's back condition made him unfit for active service, leading to his discharge from the military.</code> |
+   | <code>Later in June 1972, the Veteran's condition was evaluated by the Army Medical Board, where it was determined that the Veteran's back condition rendered him physically unfit for active service, and he was subsequently discharged from service.</code> | <code>In June 1972, the Veteran's condition was assessed by the Army Medical Board, which concluded that the Veteran's back condition made him unfit for active service, leading to his discharge from the military.</code> | <code>The court has granted a petition for a writ of certiorari to review a decision made by the Court of Appeal of California, Fourth Appellate District, Division One.</code> |
+ * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
+   ```json
+   {
+       "scale": 20.0,
+       "similarity_fct": "cos_sim"
+   }
+   ```
+
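These parameters correspond to the loss object the trainer would have been given. A minimal sketch, assuming the base model as starting point:

```python
from sentence_transformers import SentenceTransformer, losses, util

model = SentenceTransformer("sentence-transformers/paraphrase-MiniLM-L3-v2")  # base model before fine-tuning
# scale=20.0 and cosine similarity match the JSON parameters listed above.
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=util.cos_sim)
```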
+ ### Evaluation Dataset
+
+ #### Unnamed Dataset
+
+ * Size: 500 evaluation samples
+ * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
+ * Approximate statistics based on the first 1000 samples:
+   |         | anchor | positive | negative |
+   |:--------|:-------|:---------|:---------|
+   | type    | string | string   | string   |
+   | details | <ul><li>min: 8 tokens</li><li>mean: 35.69 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 32.11 tokens</li><li>max: 77 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 32.12 tokens</li><li>max: 77 tokens</li></ul> |
+ * Samples:
+   | anchor | positive | negative |
+   |:-------|:---------|:---------|
+   | <code>(Virginia v. Black, supra, 538 U.S. at p. 347.)</code> | <code>The Black Court asserted that the "vagueness doctrine is a safeguard against the arbitrary exercise of power by government officials."</code> | <code>This Court will determine if there was enough evidence to support the jury's verdict by considering whether reasonable people could have reached different conclusions based on the evidence presented.</code> |
+   | <code>However, this Court will determine that there was sufficient evidence to sustain the jury's verdict if the evidence was "of such quality and weight that, having in mind the beyond a reasonable doubt burden of proof standard, reasonable fair-minded men in the exercise of impartial judgment might reach different conclusions on every element of the offense."</code> | <code>This Court will determine if there was enough evidence to support the jury's verdict by considering whether reasonable people could have reached different conclusions based on the evidence presented.</code> | <code>The VA psychiatrist believed that the Veteran was likely to have PTSD as a direct result of the attack on him during his military service in Korea.</code> |
+   | <code>This VA psychiatrist opined that the Veteran had PTSD more likely than not to be the direct result of the attack on him during service in Korea.</code> | <code>The VA psychiatrist believed that the Veteran was likely to have PTSD as a direct result of the attack on him during his military service in Korea.</code> | <code>She noted that the Veteran's greatest source of stress was caring for their adult child without any assistance.</code> |
+ * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
+   ```json
+   {
+       "scale": 20.0,
+       "similarity_fct": "cos_sim"
+   }
+   ```
+
+ ### Training Hyperparameters
+ #### Non-Default Hyperparameters
+
+ - `eval_strategy`: steps
+ - `per_device_train_batch_size`: 16
+ - `per_device_eval_batch_size`: 16
+ - `num_train_epochs`: 1
+ - `warmup_ratio`: 0.1
+ - `fp16`: True
+ - `batch_sampler`: no_duplicates
+
+ #### All Hyperparameters
+ <details><summary>Click to expand</summary>
+
+ - `overwrite_output_dir`: False
+ - `do_predict`: False
+ - `eval_strategy`: steps
+ - `prediction_loss_only`: True
+ - `per_device_train_batch_size`: 16
+ - `per_device_eval_batch_size`: 16
+ - `per_gpu_train_batch_size`: None
+ - `per_gpu_eval_batch_size`: None
+ - `gradient_accumulation_steps`: 1
+ - `eval_accumulation_steps`: None
+ - `learning_rate`: 5e-05
+ - `weight_decay`: 0.0
+ - `adam_beta1`: 0.9
+ - `adam_beta2`: 0.999
+ - `adam_epsilon`: 1e-08
+ - `max_grad_norm`: 1.0
+ - `num_train_epochs`: 1
+ - `max_steps`: -1
+ - `lr_scheduler_type`: linear
+ - `lr_scheduler_kwargs`: {}
+ - `warmup_ratio`: 0.1
+ - `warmup_steps`: 0
+ - `log_level`: passive
+ - `log_level_replica`: warning
+ - `log_on_each_node`: True
+ - `logging_nan_inf_filter`: True
+ - `save_safetensors`: True
+ - `save_on_each_node`: False
+ - `save_only_model`: False
+ - `restore_callback_states_from_checkpoint`: False
+ - `no_cuda`: False
+ - `use_cpu`: False
+ - `use_mps_device`: False
+ - `seed`: 42
+ - `data_seed`: None
+ - `jit_mode_eval`: False
+ - `use_ipex`: False
+ - `bf16`: False
+ - `fp16`: True
+ - `fp16_opt_level`: O1
+ - `half_precision_backend`: auto
+ - `bf16_full_eval`: False
+ - `fp16_full_eval`: False
+ - `tf32`: None
+ - `local_rank`: 0
+ - `ddp_backend`: None
+ - `tpu_num_cores`: None
+ - `tpu_metrics_debug`: False
+ - `debug`: []
+ - `dataloader_drop_last`: False
+ - `dataloader_num_workers`: 0
+ - `dataloader_prefetch_factor`: None
+ - `past_index`: -1
+ - `disable_tqdm`: False
+ - `remove_unused_columns`: True
+ - `label_names`: None
+ - `load_best_model_at_end`: False
+ - `ignore_data_skip`: False
+ - `fsdp`: []
+ - `fsdp_min_num_params`: 0
+ - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
+ - `fsdp_transformer_layer_cls_to_wrap`: None
+ - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
+ - `deepspeed`: None
+ - `label_smoothing_factor`: 0.0
+ - `optim`: adamw_torch
+ - `optim_args`: None
+ - `adafactor`: False
+ - `group_by_length`: False
+ - `length_column_name`: length
+ - `ddp_find_unused_parameters`: None
+ - `ddp_bucket_cap_mb`: None
+ - `ddp_broadcast_buffers`: False
+ - `dataloader_pin_memory`: True
+ - `dataloader_persistent_workers`: False
+ - `skip_memory_metrics`: True
+ - `use_legacy_prediction_loop`: False
+ - `push_to_hub`: False
+ - `resume_from_checkpoint`: None
+ - `hub_model_id`: None
+ - `hub_strategy`: every_save
+ - `hub_private_repo`: False
+ - `hub_always_push`: False
+ - `gradient_checkpointing`: False
+ - `gradient_checkpointing_kwargs`: None
+ - `include_inputs_for_metrics`: False
+ - `eval_do_concat_batches`: True
+ - `fp16_backend`: auto
+ - `push_to_hub_model_id`: None
+ - `push_to_hub_organization`: None
+ - `mp_parameters`:
+ - `auto_find_batch_size`: False
+ - `full_determinism`: False
+ - `torchdynamo`: None
+ - `ray_scope`: last
+ - `ddp_timeout`: 1800
+ - `torch_compile`: False
+ - `torch_compile_backend`: None
+ - `torch_compile_mode`: None
+ - `dispatch_batches`: None
+ - `split_batches`: None
+ - `include_tokens_per_second`: False
+ - `include_num_input_tokens_seen`: False
+ - `neftune_noise_alpha`: None
+ - `optim_target_modules`: None
+ - `batch_eval_metrics`: False
+ - `eval_on_start`: False
+ - `batch_sampler`: no_duplicates
+ - `multi_dataset_batch_sampler`: proportional
+
+ </details>
+
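A plausible reconstruction of the training setup from the non-default values above, using the Sentence Transformers v3 trainer API; the dataset loading is a placeholder, since the 2,000 anchor/positive/negative triplets are not published with this commit:

```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer, losses
from sentence_transformers.training_args import BatchSamplers, SentenceTransformerTrainingArguments

model = SentenceTransformer("sentence-transformers/paraphrase-MiniLM-L3-v2")

# Placeholder data: stands in for the unpublished 2,000-triplet train / 500-triplet eval splits.
train_dataset = Dataset.from_dict({"anchor": ["..."], "positive": ["..."], "negative": ["..."]})
eval_dataset = Dataset.from_dict({"anchor": ["..."], "positive": ["..."], "negative": ["..."]})

args = SentenceTransformerTrainingArguments(
    output_dir="legal_paraphrase",            # hypothetical output path
    num_train_epochs=1,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    warmup_ratio=0.1,
    fp16=True,
    eval_strategy="steps",
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # avoids duplicate texts in a batch, as MNRL prefers
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=losses.MultipleNegativesRankingLoss(model),
)
trainer.train()
```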
+ ### Training Logs
+ | Epoch | Step | Training Loss | Validation Loss | all-nli-dev_max_accuracy | all-nli-test_max_accuracy |
+ |:-----:|:----:|:-------------:|:---------------:|:------------------------:|:-------------------------:|
+ | 0     | 0    | -             | -               | 1.0                      | -                         |
+ | 0.08  | 10   | 0.1402        | 0.0759          | 1.0                      | -                         |
+ | 0.16  | 20   | 0.0873        | 0.0726          | 1.0                      | -                         |
+ | 0.24  | 30   | 0.0992        | 0.0677          | 1.0                      | -                         |
+ | 0.32  | 40   | 0.0759        | 0.0651          | 1.0                      | -                         |
+ | 0.4   | 50   | 0.0355        | 0.0652          | 1.0                      | -                         |
+ | 0.48  | 60   | 0.0814        | 0.0666          | 1.0                      | -                         |
+ | 0.56  | 70   | 0.0353        | 0.0677          | 1.0                      | -                         |
+ | 0.64  | 80   | 0.1404        | 0.0677          | 1.0                      | -                         |
+ | 0.72  | 90   | 0.0336        | 0.0664          | 1.0                      | -                         |
+ | 0.8   | 100  | 0.0559        | 0.0661          | 1.0                      | -                         |
+ | 0.88  | 110  | 0.0484        | 0.0654          | 1.0                      | -                         |
+ | 0.96  | 120  | 0.0522        | 0.0650          | 1.0                      | -                         |
+ | 1.0   | 125  | -             | -               | -                        | 1.0                       |
+
+ ### Framework Versions
+ - Python: 3.10.12
+ - Sentence Transformers: 3.0.1
+ - Transformers: 4.42.4
+ - PyTorch: 2.3.1+cu121
+ - Accelerate: 0.32.1
+ - Datasets: 2.20.0
+ - Tokenizers: 0.19.1
+
+ ## Citation
+
+ ### BibTeX
+
+ #### Sentence Transformers
+ ```bibtex
+ @inproceedings{reimers-2019-sentence-bert,
+     title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
+     author = "Reimers, Nils and Gurevych, Iryna",
+     booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
+     month = "11",
+     year = "2019",
+     publisher = "Association for Computational Linguistics",
+     url = "https://arxiv.org/abs/1908.10084",
+ }
+ ```
+
+ #### MultipleNegativesRankingLoss
+ ```bibtex
+ @misc{henderson2017efficient,
+     title={Efficient Natural Language Response Suggestion for Smart Reply},
+     author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
+     year={2017},
+     eprint={1705.00652},
+     archivePrefix={arXiv},
+     primaryClass={cs.CL}
+ }
+ ```
+
+ <!--
+ ## Glossary
+
+ *Clearly define terms in order to be accessible across audiences.*
+ -->
+
+ <!--
+ ## Model Card Authors
+
+ *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
+ -->
+
+ <!--
+ ## Model Card Contact
+
+ *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
+ -->
config.json ADDED
@@ -0,0 +1,26 @@
+ {
+   "_name_or_path": "sentence-transformers/paraphrase-MiniLM-L3-v2",
+   "architectures": [
+     "BertModel"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "classifier_dropout": null,
+   "gradient_checkpointing": false,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 384,
+   "initializer_range": 0.02,
+   "intermediate_size": 1536,
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 512,
+   "model_type": "bert",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 3,
+   "pad_token_id": 0,
+   "position_embedding_type": "absolute",
+   "torch_dtype": "float32",
+   "transformers_version": "4.42.4",
+   "type_vocab_size": 2,
+   "use_cache": true,
+   "vocab_size": 30522
+ }
config_sentence_transformers.json ADDED
@@ -0,0 +1,10 @@
+ {
+   "__version__": {
+     "sentence_transformers": "3.0.1",
+     "transformers": "4.42.4",
+     "pytorch": "2.3.1+cu121"
+   },
+   "prompts": {},
+   "default_prompt_name": null,
+   "similarity_fn_name": null
+ }
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f344ac3cfe0362ecc958d11b8c1881b74da8de7b9f094e68eeff24b9aa15cabc
+ size 69565312
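As a rough cross-check (assuming float32 weights, as `torch_dtype` in config.json above indicates, and that the BERT pooler head is stored), the file size lines up with the parameter count implied by the architecture; the small remainder is the safetensors header:

```python
# Approximate parameter count for a 3-layer BERT with hidden size 384,
# intermediate size 1536 and a 30,522-token vocabulary (values from config.json above).
vocab, hidden, layers, intermediate, max_pos = 30522, 384, 3, 1536, 512

embeddings = (vocab + max_pos + 2) * hidden + 2 * hidden      # word/position/token-type tables + LayerNorm
per_layer = (
    4 * (hidden * hidden + hidden)            # Q, K, V and attention output projections
    + (hidden * intermediate + intermediate)  # feed-forward up-projection
    + (intermediate * hidden + hidden)        # feed-forward down-projection
    + 2 * 2 * hidden                          # two LayerNorms
)
pooler = hidden * hidden + hidden             # BertModel pooler head (assumed to be saved)

params = embeddings + layers * per_layer + pooler
print(params, params * 4)  # ~17.39M parameters, ~69.56 MB in float32, close to the 69565312-byte file
```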
modules.json ADDED
@@ -0,0 +1,14 @@
+ [
+   {
+     "idx": 0,
+     "name": "0",
+     "path": "",
+     "type": "sentence_transformers.models.Transformer"
+   },
+   {
+     "idx": 1,
+     "name": "1",
+     "path": "1_Pooling",
+     "type": "sentence_transformers.models.Pooling"
+   }
+ ]
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "max_seq_length": 128,
+   "do_lower_case": false
+ }
special_tokens_map.json ADDED
@@ -0,0 +1,37 @@
+ {
+   "cls_token": {
+     "content": "[CLS]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "mask_token": {
+     "content": "[MASK]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": {
+     "content": "[PAD]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "sep_token": {
+     "content": "[SEP]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "unk_token": {
+     "content": "[UNK]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,64 @@
+ {
+   "added_tokens_decoder": {
+     "0": {
+       "content": "[PAD]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "100": {
+       "content": "[UNK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "101": {
+       "content": "[CLS]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "102": {
+       "content": "[SEP]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "103": {
+       "content": "[MASK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "clean_up_tokenization_spaces": true,
+   "cls_token": "[CLS]",
+   "do_basic_tokenize": true,
+   "do_lower_case": true,
+   "mask_token": "[MASK]",
+   "max_length": 128,
+   "model_max_length": 128,
+   "never_split": null,
+   "pad_to_multiple_of": null,
+   "pad_token": "[PAD]",
+   "pad_token_type_id": 0,
+   "padding_side": "right",
+   "sep_token": "[SEP]",
+   "stride": 0,
+   "strip_accents": null,
+   "tokenize_chinese_chars": true,
+   "tokenizer_class": "BertTokenizer",
+   "truncation_side": "right",
+   "truncation_strategy": "longest_first",
+   "unk_token": "[UNK]"
+ }
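A quick way to see the lower-casing and 128-token cap from this config in action (the example sentence is arbitrary):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("justArmenian/legal_paraphrase")
encoded = tokenizer("The Veteran has been diagnosed with PTSD.", truncation=True, max_length=128)
# do_lower_case=true means the WordPiece tokens come back lower-cased, wrapped in [CLS] ... [SEP]
print(tokenizer.convert_ids_to_tokens(encoded["input_ids"]))
```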
vocab.txt ADDED
The diff for this file is too large to render. See raw diff