yjoonjang committed · verified · commit 82afd76 · 1 parent(s): 20592cb

Add new CrossEncoder model

README.md ADDED
---
language:
- en
tags:
- sentence-transformers
- cross-encoder
- generated_from_trainer
- dataset_size:78704
- loss:LambdaLoss
base_model: microsoft/MiniLM-L12-H384-uncased
datasets:
- microsoft/ms_marco
pipeline_tag: text-ranking
library_name: sentence-transformers
metrics:
- map
- mrr@10
- ndcg@10
model-index:
- name: CrossEncoder based on microsoft/MiniLM-L12-H384-uncased
  results:
  - task:
      type: cross-encoder-reranking
      name: Cross Encoder Reranking
    dataset:
      name: NanoMSMARCO R100
      type: NanoMSMARCO_R100
    metrics:
    - type: map
      value: 0.4842
      name: Map
    - type: mrr@10
      value: 0.4716
      name: Mrr@10
    - type: ndcg@10
      value: 0.5412
      name: Ndcg@10
  - task:
      type: cross-encoder-reranking
      name: Cross Encoder Reranking
    dataset:
      name: NanoNFCorpus R100
      type: NanoNFCorpus_R100
    metrics:
    - type: map
      value: 0.3309
      name: Map
    - type: mrr@10
      value: 0.5813
      name: Mrr@10
    - type: ndcg@10
      value: 0.3855
      name: Ndcg@10
  - task:
      type: cross-encoder-reranking
      name: Cross Encoder Reranking
    dataset:
      name: NanoNQ R100
      type: NanoNQ_R100
    metrics:
    - type: map
      value: 0.5926
      name: Map
    - type: mrr@10
      value: 0.59
      name: Mrr@10
    - type: ndcg@10
      value: 0.66
      name: Ndcg@10
  - task:
      type: cross-encoder-nano-beir
      name: Cross Encoder Nano BEIR
    dataset:
      name: NanoBEIR R100 mean
      type: NanoBEIR_R100_mean
    metrics:
    - type: map
      value: 0.4692
      name: Map
    - type: mrr@10
      value: 0.5477
      name: Mrr@10
    - type: ndcg@10
      value: 0.5289
      name: Ndcg@10
---

# CrossEncoder based on microsoft/MiniLM-L12-H384-uncased

This is a [Cross Encoder](https://www.sbert.net/docs/cross_encoder/usage/usage.html) model finetuned from [microsoft/MiniLM-L12-H384-uncased](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased) on the [ms_marco](https://huggingface.co/datasets/microsoft/ms_marco) dataset using the [sentence-transformers](https://www.SBERT.net) library. It computes scores for pairs of texts, which can be used for text reranking and semantic search.

## Model Details

### Model Description
- **Model Type:** Cross Encoder
- **Base model:** [microsoft/MiniLM-L12-H384-uncased](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased) <!-- at revision 44acabbec0ef496f6dbc93adadea57f376b7c0ec -->
- **Maximum Sequence Length:** 512 tokens
- **Number of Output Labels:** 1 label
- **Training Dataset:**
    - [ms_marco](https://huggingface.co/datasets/microsoft/ms_marco)
- **Language:** en
<!-- - **License:** Unknown -->

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Documentation:** [Cross Encoder Documentation](https://www.sbert.net/docs/cross_encoder/usage/usage.html)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Cross Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=cross-encoder)

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference:

```python
from sentence_transformers import CrossEncoder

# Download from the 🤗 Hub
model = CrossEncoder("yjoonjang/reranker-msmarco-v1.1-MiniLM-L12-H384-uncased-lambdaloss")
# Get scores for pairs of texts
pairs = [
    ['How many calories in an egg', 'There are on average between 55 and 80 calories in an egg depending on its size.'],
    ['How many calories in an egg', 'Egg whites are very low in calories, have no fat, no cholesterol, and are loaded with protein.'],
    ['How many calories in an egg', 'Most of the calories in an egg come from the yellow yolk in the center.'],
]
scores = model.predict(pairs)
print(scores.shape)
# (3,)

# Or rank different texts based on similarity to a single text
ranks = model.rank(
    'How many calories in an egg',
    [
        'There are on average between 55 and 80 calories in an egg depending on its size.',
        'Egg whites are very low in calories, have no fat, no cholesterol, and are loaded with protein.',
        'Most of the calories in an egg come from the yellow yolk in the center.',
    ],
)
# [{'corpus_id': ..., 'score': ...}, {'corpus_id': ..., 'score': ...}, ...]
```

<!--
### Direct Usage (Transformers)

<details><summary>Click to see the direct usage in Transformers</summary>

</details>
-->

<!--
### Downstream Usage (Sentence Transformers)

You can finetune this model on your own dataset.

<details><summary>Click to expand</summary>

</details>
-->

<!--
### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->

## Evaluation

### Metrics

#### Cross Encoder Reranking

* Datasets: `NanoMSMARCO_R100`, `NanoNFCorpus_R100` and `NanoNQ_R100`
* Evaluated with [<code>CrossEncoderRerankingEvaluator</code>](https://sbert.net/docs/package_reference/cross_encoder/evaluation.html#sentence_transformers.cross_encoder.evaluation.CrossEncoderRerankingEvaluator) with these parameters:
  ```json
  {
      "at_k": 10,
      "always_rerank_positives": true
  }
  ```

| Metric      | NanoMSMARCO_R100     | NanoNFCorpus_R100    | NanoNQ_R100          |
|:------------|:---------------------|:---------------------|:---------------------|
| map         | 0.4842 (-0.0054)     | 0.3309 (+0.0699)     | 0.5926 (+0.1730)     |
| mrr@10      | 0.4716 (-0.0059)     | 0.5813 (+0.0815)     | 0.5900 (+0.1633)     |
| **ndcg@10** | **0.5412 (+0.0007)** | **0.3855 (+0.0605)** | **0.6600 (+0.1593)** |
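For readers unfamiliar with these metrics: mrr@10 is the reciprocal rank of the first relevant document among the top 10, and ndcg@10 is the discounted cumulative gain of the top 10 normalized by that of an ideal ordering. A minimal sketch with toy relevance labels (not the actual evaluation data):

```python
import math

def mrr_at_k(relevance, k=10):
    """Reciprocal rank of the first relevant document within the top k."""
    for i, rel in enumerate(relevance[:k]):
        if rel > 0:
            return 1.0 / (i + 1)
    return 0.0

def ndcg_at_k(relevance, k=10):
    """DCG of the given ranking divided by the DCG of the ideal ranking."""
    def dcg(rels):
        return sum(rel / math.log2(i + 2) for i, rel in enumerate(rels[:k]))
    ideal = dcg(sorted(relevance, reverse=True))
    return dcg(relevance) / ideal if ideal > 0 else 0.0

# Toy ranking: the only relevant document (label 1) sits at rank 2
ranked_labels = [0, 1, 0, 0, 0]
print(mrr_at_k(ranked_labels))   # 0.5
print(ndcg_at_k(ranked_labels))  # 1/log2(3) ≈ 0.631
```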
#### Cross Encoder Nano BEIR

* Dataset: `NanoBEIR_R100_mean`
* Evaluated with [<code>CrossEncoderNanoBEIREvaluator</code>](https://sbert.net/docs/package_reference/cross_encoder/evaluation.html#sentence_transformers.cross_encoder.evaluation.CrossEncoderNanoBEIREvaluator) with these parameters:
  ```json
  {
      "dataset_names": [
          "msmarco",
          "nfcorpus",
          "nq"
      ],
      "rerank_k": 100,
      "at_k": 10,
      "always_rerank_positives": true
  }
  ```

| Metric      | Value                |
|:------------|:---------------------|
| map         | 0.4692 (+0.0792)     |
| mrr@10      | 0.5477 (+0.0797)     |
| **ndcg@10** | **0.5289 (+0.0735)** |

<!--
## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->

<!--
### Recommendations

*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->

## Training Details

### Training Dataset

#### ms_marco

* Dataset: [ms_marco](https://huggingface.co/datasets/microsoft/ms_marco) at [a47ee7a](https://huggingface.co/datasets/microsoft/ms_marco/tree/a47ee7aae8d7d466ba15f9f0bfac3b3681087b3a)
* Size: 78,704 training samples
* Columns: <code>query</code>, <code>docs</code>, and <code>labels</code>
* Approximate statistics based on the first 1000 samples:
  |         | query                                                                                          | docs                                                                                   | labels                                                                                 |
  |:--------|:-----------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------|
  | type    | string                                                                                          | list                                                                                    | list                                                                                    |
  | details | <ul><li>min: 11 characters</li><li>mean: 32.93 characters</li><li>max: 95 characters</li></ul>  | <ul><li>min: 3 elements</li><li>mean: 6.50 elements</li><li>max: 10 elements</li></ul>  | <ul><li>min: 3 elements</li><li>mean: 6.50 elements</li><li>max: 10 elements</li></ul>  |
* Samples:
  | query | docs | labels |
  |:------|:-----|:-------|
  | <code>what does vegan mean</code> | <code>['A vegan, a person who practices veganism, is an individual who actively avoids the use of animal products for food, clothing or any other purpose. As with many diets and lifestyles, not all vegans approach animal product avoidance in the same ways. For example, some vegans completely avoid all animal by-products, while others consider it acceptable to use honey, silk, and other by-products produced from insects.', 'Fruitarian: Eats only raw fruit, including raw nuts and seeds. Vegan. Does not eat dairy products, eggs, or any other animal product. So in a nutshell, a vegetarian diet excludes flesh, but includes other animal products: A vegan diet is one that excludes all animal products. And I have to say that I have met very few vegans who stop with what they put in their mouths. ', 'Animal Ingredients and Their Alternatives. Adopting a vegan diet means saying “no” to cruelty to animals and environmental destruction and “yes” to compassion and good health. It also means paying attent...</code> | <code>[1, 0, 0, 0, 0, ...]</code> |
  | <code>difference between viral and bacterial conjunctivitis symptoms</code> | <code>["Viral and bacterial conjunctivitis. Viral conjunctivitis and bacterial conjunctivitis may affect one or both eyes. Viral conjunctivitis usually produces a watery discharge. Bacterial conjunctivitis often produces a thicker, yellow-green discharge. Both types can be associated with colds or symptoms of a respiratory infection, such as a sore throat. Both viral and bacterial types are very contagious. They are spread through direct or indirect contact with the eye secretions of someone who's infected", 'A Honor Society of Nursing (STTI) answered. Viral and bacterial conjunctivitis are similar, but differ in several key ways. First, bacterial conjunctivitis can be cured with antibiotics, while the viral form cannot. Second, there is a slight variation in symptoms. With viral conjunctivitis, the discharge from the eye is clearer and less thick than with the bacterial infection. Viral conjunctivitis can also cause painful swelling in the lymph node nearest the ear, a symptom not experienc...</code> | <code>[1, 0, 0, 0, 0, ...]</code> |
  | <code>can single member llc be taxed as s corp</code> | <code>['A single-member limited liability company, as a solely owned LLC is called, gives the owner a choice of how to be taxed -- as a sole proprietorship, an S corporation or a C corporation. The legal structure of the business itself doesn’t change with any of the choices. Under an S corporation classification, a single-member LLC needs to have a large enough profit in excess of the owner’s salary to realize any tax savings on passive income.', 'An S corp may own up to 100 percent of an LLC, or limited liability company. While all but single-member LLCs cannot be shareholders in S corporations, the reverse -- an S corporation owning an LLC -- is legal. The similarity of tax treatment for S corps and LLCs eliminates most of the common concerns about IRS issues. There is, however, one way for an LLC to own stock in an S corp. A single member LLC, taxed as a sole proprietorship, is called a disregarded entity by the IRS. Treated like an unincorporated individual, this LLC could own stock in ...</code> | <code>[1, 0, 0, 0, 0, ...]</code> |
* Loss: [<code>LambdaLoss</code>](https://sbert.net/docs/package_reference/cross_encoder/losses.html#lambdaloss) with these parameters:
  ```json
  {
      "weighting_scheme": "sentence_transformers.cross_encoder.losses.LambdaLoss.NDCGLoss2PPScheme",
      "k": null,
      "sigma": 1.0,
      "eps": 1e-10,
      "reduction_log": "binary",
      "activation_fct": "torch.nn.modules.linear.Identity",
      "mini_batch_size": 16
  }
  ```
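LambdaLoss is a listwise ranking loss that weights each pairwise score comparison by how much the ranking metric (here NDCG) would change if the two documents swapped positions; the NDCGLoss2++ scheme configured above is a refinement of this idea. As an illustrative sketch only (the classic LambdaRank-style |ΔNDCG| weight, not the library's actual implementation), with binary labels like the samples above:

```python
import math

def delta_ndcg(labels, i, j):
    """|Change in DCG| from swapping the documents currently at ranks i and j,
    normalized by the ideal DCG. Gains are the graded relevance labels."""
    ideal = sum(rel / math.log2(r + 2)
                for r, rel in enumerate(sorted(labels, reverse=True)))
    gain_diff = labels[i] - labels[j]
    discount_diff = 1 / math.log2(i + 2) - 1 / math.log2(j + 2)
    return abs(gain_diff * discount_diff) / ideal

# Labels in current ranked order: the one relevant doc is at rank 2
labels = [0, 1, 0, 0]
# Swapping ranks 1 and 2 would fix the ranking, so that pair gets a
# larger weight than a swap deeper in the list
print(delta_ndcg(labels, 0, 1) > delta_ndcg(labels, 1, 3))  # True
```

Pairs whose swap barely moves NDCG contribute little gradient, which is how the loss focuses training on mistakes near the top of the ranking.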

### Evaluation Dataset

#### ms_marco

* Dataset: [ms_marco](https://huggingface.co/datasets/microsoft/ms_marco) at [a47ee7a](https://huggingface.co/datasets/microsoft/ms_marco/tree/a47ee7aae8d7d466ba15f9f0bfac3b3681087b3a)
* Size: 1,000 evaluation samples
* Columns: <code>query</code>, <code>docs</code>, and <code>labels</code>
* Approximate statistics based on the first 1000 samples:
  |         | query                                                                                          | docs                                                                                   | labels                                                                                 |
  |:--------|:-----------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------|
  | type    | string                                                                                          | list                                                                                    | list                                                                                    |
  | details | <ul><li>min: 11 characters</li><li>mean: 33.63 characters</li><li>max: 99 characters</li></ul>  | <ul><li>min: 3 elements</li><li>mean: 6.50 elements</li><li>max: 10 elements</li></ul>  | <ul><li>min: 3 elements</li><li>mean: 6.50 elements</li><li>max: 10 elements</li></ul>  |
* Samples:
  | query | docs | labels |
  |:------|:-----|:-------|
  | <code>define monogenic trait</code> | <code>['An allele is a version of a gene. For example, in fruitflies there is a gene which determines eye colour: one allele gives red eyes, and another gives white eyes; it is the same *gene*, just different versions of that gene. A monogenic trait is one which is encoded by a single gene. e.g. - cystic fibrosis in humans. There is a single gene which determines this trait: the wild-type allele is healthy, while the disease allele gives you cystic fibrosis', 'Abstract. Monogenic inheritance refers to genetic control of a phenotype or trait by a single gene. For a monogenic trait, mutations in one (dominant) or both (recessive) copies of the gene are sufficient for the trait to be expressed. Digenic inheritance refers to mutation on two genes interacting to cause a genetic phenotype or disease. Triallelic inheritance is a special case of digenic inheritance that requires homozygous mutations at one locus and heterozygous mutations at a second locus to express a phenotype.', 'A trait that is ...</code> | <code>[1, 1, 0, 0, 0, ...]</code> |
  | <code>behavioral theory definition</code> | <code>["Not to be confused with Behavioralism. Behaviorism (or behaviourism) is an approach to psychology that focuses on an individual's behavior. It combines elements of philosophy, methodology, and psychological theory", 'The initial assumption is that behavior can be explained and further described using behavioral theories. For instance, John Watson and B.F. Skinner advocate the theory that behavior can be acquired through conditioning. Also known as general behavior theory. BEHAVIOR THEORY: Each behavioral theory is an advantage to learning, because it provides teachers with a new and different approach.. No related posts. ', 'behaviorism. noun be·hav·ior·ism. : a school of psychology that takes the objective evidence of behavior (as measured responses to stimuli) as the only concern of its research and the only basis of its theory without reference to conscious experience—compare cognitive psychology. : a school of psychology that takes the objective evidence of behavior (as measured ...</code> | <code>[1, 0, 0, 0, 0, ...]</code> |
  | <code>What is a disease that is pleiotropic?</code> | <code>['Unsourced material may be challenged and removed. (September 2013). Pleiotropy occurs when one gene influences two or more seemingly unrelated phenotypic traits, an example being phenylketonuria, which is a human disease that affects multiple systems but is caused by one gene defect. Consequently, a mutation in a pleiotropic gene may have an effect on some or all traits simultaneously. The underlying mechanism is that the gene codes for a product that is, for example, used by various cells, or has a signaling function on various targets. A classic example of pleiotropy is the human disease phenylketonuria (PKU).', 'Pleiotropic, autosomal dominant disorder affecting connective tissue: Related Diseases. Pleiotropic, autosomal dominant disorder affecting connective tissue: Pleiotropic, autosomal dominant disorder affecting connective tissue is listed as a type of (or associated with) the following medical conditions in our database: 1 Heart conditions. Office of Rare Diseases (ORD) of ...</code> | <code>[1, 0, 0, 0, 0, ...]</code> |
* Loss: [<code>LambdaLoss</code>](https://sbert.net/docs/package_reference/cross_encoder/losses.html#lambdaloss) with these parameters:
  ```json
  {
      "weighting_scheme": "sentence_transformers.cross_encoder.losses.LambdaLoss.NDCGLoss2PPScheme",
      "k": null,
      "sigma": 1.0,
      "eps": 1e-10,
      "reduction_log": "binary",
      "activation_fct": "torch.nn.modules.linear.Identity",
      "mini_batch_size": 16
  }
  ```

### Training Hyperparameters
#### Non-Default Hyperparameters

- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `seed`: 12
- `bf16`: True
- `load_best_model_at_end`: True

#### All Hyperparameters
<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 12
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`: 
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional

</details>

### Training Logs
| Epoch      | Step     | Training Loss | Validation Loss | NanoMSMARCO_R100_ndcg@10 | NanoNFCorpus_R100_ndcg@10 | NanoNQ_R100_ndcg@10  | NanoBEIR_R100_mean_ndcg@10 |
|:----------:|:--------:|:-------------:|:---------------:|:------------------------:|:-------------------------:|:--------------------:|:--------------------------:|
| -1         | -1       | -             | -               | 0.0406 (-0.4998)         | 0.2212 (-0.1039)          | 0.0519 (-0.4487)     | 0.1046 (-0.3508)           |
| 0.0002     | 1        | 1.4961        | -               | -                        | -                         | -                    | -                          |
| 0.0508     | 250      | 1.5252        | -               | -                        | -                         | -                    | -                          |
| 0.1016     | 500      | 1.4056        | 1.3193          | 0.3963 (-0.1442)         | 0.2660 (-0.0591)          | 0.4843 (-0.0163)     | 0.3822 (-0.0732)           |
| 0.1525     | 750      | 1.317         | -               | -                        | -                         | -                    | -                          |
| 0.2033     | 1000     | 1.2607        | 1.1964          | 0.4744 (-0.0660)         | 0.3642 (+0.0392)          | 0.6067 (+0.1060)     | 0.4817 (+0.0264)           |
| 0.2541     | 1250     | 1.2321        | -               | -                        | -                         | -                    | -                          |
| 0.3049     | 1500     | 1.2179        | 1.1596          | 0.5617 (+0.0213)         | 0.3948 (+0.0697)          | 0.6214 (+0.1207)     | 0.5260 (+0.0706)           |
| 0.3558     | 1750     | 1.214         | -               | -                        | -                         | -                    | -                          |
| 0.4066     | 2000     | 1.1889        | 1.1559          | 0.5516 (+0.0112)         | 0.3769 (+0.0519)          | 0.6057 (+0.1051)     | 0.5114 (+0.0560)           |
| 0.4574     | 2250     | 1.1842        | -               | -                        | -                         | -                    | -                          |
| 0.5082     | 2500     | 1.1895        | 1.1433          | 0.5389 (-0.0015)         | 0.4017 (+0.0767)          | 0.5733 (+0.0726)     | 0.5046 (+0.0493)           |
| 0.5591     | 2750     | 1.1814        | -               | -                        | -                         | -                    | -                          |
| **0.6099** | **3000** | **1.1748**    | **1.1388**      | **0.5412 (+0.0007)**     | **0.3855 (+0.0605)**      | **0.6600 (+0.1593)** | **0.5289 (+0.0735)**       |
| 0.6607     | 3250     | 1.177         | -               | -                        | -                         | -                    | -                          |
| 0.7115     | 3500     | 1.1879        | 1.1318          | 0.5499 (+0.0095)         | 0.3659 (+0.0408)          | 0.6129 (+0.1122)     | 0.5095 (+0.0542)           |
| 0.7624     | 3750     | 1.1792        | -               | -                        | -                         | -                    | -                          |
| 0.8132     | 4000     | 1.1708        | 1.1225          | 0.5543 (+0.0138)         | 0.3728 (+0.0477)          | 0.6207 (+0.1201)     | 0.5159 (+0.0605)           |
| 0.8640     | 4250     | 1.1415        | -               | -                        | -                         | -                    | -                          |
| 0.9148     | 4500     | 1.1573        | 1.1192          | 0.5582 (+0.0178)         | 0.3843 (+0.0593)          | 0.6293 (+0.1286)     | 0.5239 (+0.0686)           |
| 0.9656     | 4750     | 1.1712        | -               | -                        | -                         | -                    | -                          |
| -1         | -1       | -             | -               | 0.5412 (+0.0007)         | 0.3855 (+0.0605)          | 0.6600 (+0.1593)     | 0.5289 (+0.0735)           |

* The bold row denotes the saved checkpoint.
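The step counts in the log follow directly from the dataset size and batch size: 78,704 samples at batch size 16 give 4,919 optimizer steps in the single epoch, and `warmup_ratio: 0.1` corresponds to roughly 492 warmup steps. A quick check (assuming one optimizer step per batch, no gradient accumulation, as configured above):

```python
samples, batch_size = 78_704, 16

steps_per_epoch = samples // batch_size  # divides evenly here
print(steps_per_epoch)                    # 4919

# The last logged row (step 4750) sits at epoch 4750/4919
print(round(4750 / steps_per_epoch, 4))   # 0.9656, matching the table

# warmup_ratio 0.1 of one epoch of steps
print(round(steps_per_epoch * 0.1))       # 492
```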

### Framework Versions
- Python: 3.11.11
- Sentence Transformers: 3.5.0.dev0
- Transformers: 4.49.0
- PyTorch: 2.6.0+cu124
- Accelerate: 1.5.2
- Datasets: 3.4.0
- Tokenizers: 0.21.1

## Citation

### BibTeX

#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

#### LambdaLoss
```bibtex
@inproceedings{wang2018lambdaloss,
    title={The LambdaLoss Framework for Ranking Metric Optimization},
    author={Wang, Xuanhui and Li, Cheng and Golbandi, Nadav and Bendersky, Michael and Najork, Marc},
    booktitle={Proceedings of the 27th ACM International Conference on Information and Knowledge Management},
    pages={1313--1322},
    year={2018}
}
```

<!--
## Glossary

*Clearly define terms in order to be accessible across audiences.*
-->

<!--
## Model Card Authors

*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->

<!--
## Model Card Contact

*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
config.json ADDED
```json
{
  "_name_or_path": "microsoft/MiniLM-L12-H384-uncased",
  "architectures": [
    "BertForSequenceClassification"
  ],
  "attention_probs_dropout_prob": 0.1,
  "classifier_dropout": null,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 384,
  "id2label": {
    "0": "LABEL_0"
  },
  "initializer_range": 0.02,
  "intermediate_size": 1536,
  "label2id": {
    "LABEL_0": 0
  },
  "layer_norm_eps": 1e-12,
  "max_position_embeddings": 512,
  "model_type": "bert",
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "pad_token_id": 0,
  "position_embedding_type": "absolute",
  "sentence_transformers": {
    "activation_fn": "torch.nn.modules.activation.Sigmoid"
  },
  "torch_dtype": "float32",
  "transformers_version": "4.49.0",
  "type_vocab_size": 2,
  "use_cache": true,
  "vocab_size": 30522
}
```
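The config above is also where the facts in the model card come from; for instance, the 512-token maximum sequence length and the single output label can be read straight out of it. A small check, using an abbreviated inline copy of the config (not loaded from the Hub):

```python
import json

# Abbreviated copy of the config.json shown above, for illustration only
config = json.loads("""
{
  "hidden_size": 384,
  "num_hidden_layers": 12,
  "max_position_embeddings": 512,
  "id2label": {"0": "LABEL_0"}
}
""")

print(config["max_position_embeddings"])  # 512 -> "Maximum Sequence Length" in the card
print(len(config["id2label"]))            # 1  -> "Number of Output Labels"
```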
model.safetensors ADDED
```
version https://git-lfs.github.com/spec/v1
oid sha256:9cd6630d4df24c7dac0d40b0d7e18f83b8ee57e011657ccc82dea8815eef9c5a
size 133464836
```
special_tokens_map.json ADDED
```json
{
  "cls_token": "[CLS]",
  "mask_token": "[MASK]",
  "pad_token": "[PAD]",
  "sep_token": "[SEP]",
  "unk_token": "[UNK]"
}
```
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
```json
{
  "added_tokens_decoder": {
    "0": {
      "content": "[PAD]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "100": {
      "content": "[UNK]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "101": {
      "content": "[CLS]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "102": {
      "content": "[SEP]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "103": {
      "content": "[MASK]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "clean_up_tokenization_spaces": true,
  "cls_token": "[CLS]",
  "do_basic_tokenize": true,
  "do_lower_case": true,
  "extra_special_tokens": {},
  "mask_token": "[MASK]",
  "model_max_length": 512,
  "never_split": null,
  "pad_token": "[PAD]",
  "sep_token": "[SEP]",
  "strip_accents": null,
  "tokenize_chinese_chars": true,
  "tokenizer_class": "BertTokenizer",
  "unk_token": "[UNK]"
}
```
vocab.txt ADDED
The diff for this file is too large to render. See raw diff