tomaarsen (HF staff) committed on
Commit 4b26fdc · verified · 1 Parent(s): 7df634b

Add new CrossEncoder model

README.md ADDED
---
language:
- en
tags:
- sentence-transformers
- cross-encoder
- generated_from_trainer
- dataset_size:78704
- loss:PListMLELoss
base_model: microsoft/MiniLM-L12-H384-uncased
datasets:
- microsoft/ms_marco
pipeline_tag: text-ranking
library_name: sentence-transformers
metrics:
- map
- mrr@10
- ndcg@10
co2_eq_emissions:
  emissions: 93.08788204215189
  energy_consumed: 0.23948392867068316
  source: codecarbon
  training_type: fine-tuning
  on_cloud: false
  cpu_model: 13th Gen Intel(R) Core(TM) i7-13700K
  ram_total_size: 31.777088165283203
  hours_used: 0.972
  hardware_used: 1 x NVIDIA GeForce RTX 3090
model-index:
- name: CrossEncoder based on microsoft/MiniLM-L12-H384-uncased
  results:
  - task:
      type: cross-encoder-reranking
      name: Cross Encoder Reranking
    dataset:
      name: NanoMSMARCO R100
      type: NanoMSMARCO_R100
    metrics:
    - type: map
      value: 0.49
      name: Map
    - type: mrr@10
      value: 0.4792
      name: Mrr@10
    - type: ndcg@10
      value: 0.5526
      name: Ndcg@10
  - task:
      type: cross-encoder-reranking
      name: Cross Encoder Reranking
    dataset:
      name: NanoNFCorpus R100
      type: NanoNFCorpus_R100
    metrics:
    - type: map
      value: 0.3317
      name: Map
    - type: mrr@10
      value: 0.5575
      name: Mrr@10
    - type: ndcg@10
      value: 0.3642
      name: Ndcg@10
  - task:
      type: cross-encoder-reranking
      name: Cross Encoder Reranking
    dataset:
      name: NanoNQ R100
      type: NanoNQ_R100
    metrics:
    - type: map
      value: 0.5829
      name: Map
    - type: mrr@10
      value: 0.5914
      name: Mrr@10
    - type: ndcg@10
      value: 0.6488
      name: Ndcg@10
  - task:
      type: cross-encoder-nano-beir
      name: Cross Encoder Nano BEIR
    dataset:
      name: NanoBEIR R100 mean
      type: NanoBEIR_R100_mean
    metrics:
    - type: map
      value: 0.4682
      name: Map
    - type: mrr@10
      value: 0.5427
      name: Mrr@10
    - type: ndcg@10
      value: 0.5219
      name: Ndcg@10
---

# CrossEncoder based on microsoft/MiniLM-L12-H384-uncased

This is a [Cross Encoder](https://www.sbert.net/docs/cross_encoder/usage/usage.html) model finetuned from [microsoft/MiniLM-L12-H384-uncased](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased) on the [ms_marco](https://huggingface.co/datasets/microsoft/ms_marco) dataset using the [sentence-transformers](https://www.SBERT.net) library. It computes scores for pairs of texts, which can be used for text reranking and semantic search.

## Model Details

### Model Description
- **Model Type:** Cross Encoder
- **Base model:** [microsoft/MiniLM-L12-H384-uncased](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased) <!-- at revision 44acabbec0ef496f6dbc93adadea57f376b7c0ec -->
- **Maximum Sequence Length:** 512 tokens
- **Number of Output Labels:** 1 label
- **Training Dataset:**
    - [ms_marco](https://huggingface.co/datasets/microsoft/ms_marco)
- **Language:** en
<!-- - **License:** Unknown -->

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Documentation:** [Cross Encoder Documentation](https://www.sbert.net/docs/cross_encoder/usage/usage.html)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Cross Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=cross-encoder)

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.
```python
from sentence_transformers import CrossEncoder

# Download from the 🤗 Hub
model = CrossEncoder("tomaarsen/reranker-msmarco-v1.1-MiniLM-L12-H384-uncased-plistmle-seeded")
# Get scores for pairs of texts
pairs = [
    ['How many calories in an egg', 'There are on average between 55 and 80 calories in an egg depending on its size.'],
    ['How many calories in an egg', 'Egg whites are very low in calories, have no fat, no cholesterol, and are loaded with protein.'],
    ['How many calories in an egg', 'Most of the calories in an egg come from the yellow yolk in the center.'],
]
scores = model.predict(pairs)
print(scores.shape)
# (3,)

# Or rank different texts based on similarity to a single text
ranks = model.rank(
    'How many calories in an egg',
    [
        'There are on average between 55 and 80 calories in an egg depending on its size.',
        'Egg whites are very low in calories, have no fat, no cholesterol, and are loaded with protein.',
        'Most of the calories in an egg come from the yellow yolk in the center.',
    ]
)
# [{'corpus_id': ..., 'score': ...}, {'corpus_id': ..., 'score': ...}, ...]
```
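
The `rank` output can also be reconstructed from raw `predict` scores: documents are simply sorted by descending score. A minimal sketch with illustrative scores (not actual model outputs):

```python
# Illustrative only: rebuild the shape of `model.rank(...)` output from
# precomputed pair scores, without loading the model.
def rank_from_scores(scores):
    # Higher score = more relevant; sort corpus indices by descending score.
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return [{"corpus_id": i, "score": scores[i]} for i in order]

# Hypothetical scores for the three egg passages above.
ranks = rank_from_scores([0.91, 0.35, 0.68])
print([r["corpus_id"] for r in ranks])
# [0, 2, 1]
```
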

<!--
### Direct Usage (Transformers)

<details><summary>Click to see the direct usage in Transformers</summary>

</details>
-->

<!--
### Downstream Usage (Sentence Transformers)

You can finetune this model on your own dataset.

<details><summary>Click to expand</summary>

</details>
-->

<!--
### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->

## Evaluation

### Metrics

#### Cross Encoder Reranking

* Datasets: `NanoMSMARCO_R100`, `NanoNFCorpus_R100` and `NanoNQ_R100`
* Evaluated with [<code>CrossEncoderRerankingEvaluator</code>](https://sbert.net/docs/package_reference/cross_encoder/evaluation.html#sentence_transformers.cross_encoder.evaluation.CrossEncoderRerankingEvaluator) with these parameters:
  ```json
  {
      "at_k": 10,
      "always_rerank_positives": true
  }
  ```

| Metric      | NanoMSMARCO_R100     | NanoNFCorpus_R100    | NanoNQ_R100          |
|:------------|:---------------------|:---------------------|:---------------------|
| map         | 0.4900 (+0.0004)     | 0.3317 (+0.0707)     | 0.5829 (+0.1632)     |
| mrr@10      | 0.4792 (+0.0017)     | 0.5575 (+0.0577)     | 0.5914 (+0.1647)     |
| **ndcg@10** | **0.5526 (+0.0122)** | **0.3642 (+0.0391)** | **0.6488 (+0.1481)** |

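For reference, mrr@10 and ndcg@10 can be computed from a ranked list of binary relevance labels. A simplified sketch of both metrics (not the evaluator's exact implementation):

```python
import math

def mrr_at_k(ranked_relevance, k=10):
    # Reciprocal rank of the first relevant document within the top k.
    for i, rel in enumerate(ranked_relevance[:k]):
        if rel:
            return 1.0 / (i + 1)
    return 0.0

def ndcg_at_k(ranked_relevance, k=10):
    # DCG with binary gains, normalized by the DCG of the ideal ordering.
    dcg = sum(rel / math.log2(i + 2) for i, rel in enumerate(ranked_relevance[:k]))
    ideal = sorted(ranked_relevance, reverse=True)
    idcg = sum(rel / math.log2(i + 2) for i, rel in enumerate(ideal[:k]))
    return dcg / idcg if idcg > 0 else 0.0

# One relevant document, ranked second out of four:
print(mrr_at_k([0, 1, 0, 0]))   # 0.5
print(round(ndcg_at_k([0, 1, 0, 0]), 4))  # 0.6309
```
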
#### Cross Encoder Nano BEIR

* Dataset: `NanoBEIR_R100_mean`
* Evaluated with [<code>CrossEncoderNanoBEIREvaluator</code>](https://sbert.net/docs/package_reference/cross_encoder/evaluation.html#sentence_transformers.cross_encoder.evaluation.CrossEncoderNanoBEIREvaluator) with these parameters:
  ```json
  {
      "dataset_names": [
          "msmarco",
          "nfcorpus",
          "nq"
      ],
      "rerank_k": 100,
      "at_k": 10,
      "always_rerank_positives": true
  }
  ```

| Metric      | Value                |
|:------------|:---------------------|
| map         | 0.4682 (+0.0781)     |
| mrr@10      | 0.5427 (+0.0747)     |
| **ndcg@10** | **0.5219 (+0.0665)** |

<!--
## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->

<!--
### Recommendations

*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->

## Training Details

### Training Dataset

#### ms_marco

* Dataset: [ms_marco](https://huggingface.co/datasets/microsoft/ms_marco) at [a47ee7a](https://huggingface.co/datasets/microsoft/ms_marco/tree/a47ee7aae8d7d466ba15f9f0bfac3b3681087b3a)
* Size: 78,704 training samples
* Columns: <code>query</code>, <code>docs</code>, and <code>labels</code>
* Approximate statistics based on the first 1000 samples:
  |         | query | docs | labels |
  |:--------|:------|:-----|:-------|
  | type    | string | list | list |
  | details | <ul><li>min: 11 characters</li><li>mean: 33.61 characters</li><li>max: 85 characters</li></ul> | <ul><li>min: 2 elements</li><li>mean: 6.00 elements</li><li>max: 10 elements</li></ul> | <ul><li>min: 2 elements</li><li>mean: 6.00 elements</li><li>max: 10 elements</li></ul> |
* Samples:
  | query | docs | labels |
  |:------|:-----|:-------|
  | <code>what does syllables mean</code> | <code>['A syllable is a unit of organization for a sequence of speech sounds. For example, the word water is composed of two syllables: wa and ter. A syllable is typically made up of a syllable nucleus (most often a vowel) with optional initial and final margins (typically, consonants). Syllables are often considered the phonological building blocks of words. They can influence the rhythm of a language, its prosody, its poetic meter and its stress patterns. The first syllable of a word is the initial syllable and the last syllable is the final syllable. In languages accented on one of the last three syllables, the last syllable is called the ultima, the next-to-last is called the penult, and the third syllable from the end is called the antepenult.', '1 A unit of pronunciation having one vowel sound, with or without surrounding consonants, forming the whole or a part of a word; for example, there are two syllables in water and three in inferno. Example sentences. 1 The vowels of the stresse...</code> | <code>[1, 0, 0, 0, 0, ...]</code> |
  | <code>how long does it take to become a child psychiatrist</code> | <code>["The Path to Becoming a Psychologist. First, you will need a bachelor's degree (4 to 5 years), which teaches the fundamentals of psychology. After that, you will need a master's degree (2 to 3 years), which can qualify you to practice in the field as a case manager, employment specialist, or social worker.", 'For example, becoming a school psychologist can take a little as two years of graduate-level education, and only requires a master’s degree. On the other hand, if you want to become a child psychologist you will need to earn a doctorate degree, which can require up to seven additional years of psychologist schooling.', '1 During the first four years of medical school you take classes, do lab work, and learn about medical ethics. 2 You may not have the opportunity to do hands-on psychiatry work at this stage, but earning your medical degree is a requirement in the path to becoming a psychiatrist, so stick with it.', '1 Clinical Psychologist: Doctorate Degree in Psychology (4 to 7...</code> | <code>[1, 0, 0, 0, 0, ...]</code> |
  | <code>how do great horned owls defend themselves</code> | <code>["Owls can't always successfully defend themselves from other animals, particularly their prey. Great horned owls, for example, are often found either dead or injured as a result of would-be prey like skunks and porcupines fighting back. Feet and Beak. Like other birds in the raptor group, owls of all species use their beaks and talons to defend themselves. An owl's feet are equipped with particularly long, sharp and curved claws, which he can dig into an adversary and use like hooks to tear and rip at flesh.", "Tom Brakefield/Stockbyte/Getty Images. Owls are raptors, birds of prey. They provide sustenance and defend themselves with strong, sharp breaks and talons. The owl's ability to avoid detection is perhaps the most important weapon in his defensive arsenal, since it allows him to avoid confrontation in the first place. Feet and Beak. Like other birds in the raptor group, owls of all species use their beaks and talons to defend themselves. An owl's feet are equipped with particula...</code> | <code>[1, 0, 0, 0, 0, ...]</code> |
* Loss: [<code>PListMLELoss</code>](https://sbert.net/docs/package_reference/cross_encoder/losses.html#plistmleloss) with these parameters:
  ```json
  {
      "lambda_weight": "sentence_transformers.cross_encoder.losses.PListMLELoss.PListMLELambdaWeight",
      "activation_fct": "torch.nn.modules.linear.Identity",
      "mini_batch_size": 16,
      "respect_input_order": true
  }
  ```
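
PListMLE builds on ListMLE, the negative Plackett-Luce log-likelihood of the ground-truth ordering, and weights each rank position (here via `PListMLELambdaWeight`). A simplified plain-Python sketch of the listwise term — the uniform-weight default below recovers plain ListMLE, and the library's actual position weighting may differ:

```python
import math

def listmle_loss(scores, position_weights=None):
    """Plackett-Luce negative log-likelihood of the given order.
    `scores` are model scores listed in ground-truth relevance order
    (most relevant first). Position-aware variants such as PListMLE
    multiply each rank's term by a weight; this is a simplified sketch,
    not the library implementation."""
    n = len(scores)
    weights = position_weights or [1.0] * n
    loss = 0.0
    for k in range(n):
        # Negative log-softmax of the k-th item over the not-yet-ranked suffix.
        log_norm = math.log(sum(math.exp(s) for s in scores[k:]))
        loss += weights[k] * (log_norm - scores[k])
    return loss

# A well-ordered list (scores decreasing with relevance) incurs a lower
# loss than a reversed one.
print(listmle_loss([3.0, 1.0, -1.0]) < listmle_loss([-1.0, 1.0, 3.0]))  # True
```
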

### Evaluation Dataset

#### ms_marco

* Dataset: [ms_marco](https://huggingface.co/datasets/microsoft/ms_marco) at [a47ee7a](https://huggingface.co/datasets/microsoft/ms_marco/tree/a47ee7aae8d7d466ba15f9f0bfac3b3681087b3a)
* Size: 1,000 evaluation samples
* Columns: <code>query</code>, <code>docs</code>, and <code>labels</code>
* Approximate statistics based on the first 1000 samples:
  |         | query | docs | labels |
  |:--------|:------|:-----|:-------|
  | type    | string | list | list |
  | details | <ul><li>min: 12 characters</li><li>mean: 33.62 characters</li><li>max: 99 characters</li></ul> | <ul><li>min: 3 elements</li><li>mean: 6.50 elements</li><li>max: 10 elements</li></ul> | <ul><li>min: 3 elements</li><li>mean: 6.50 elements</li><li>max: 10 elements</li></ul> |
* Samples:
  | query | docs | labels |
  |:------|:-----|:-------|
  | <code>what age do kids fly free?</code> | <code>["If you're taking a domestic flight with your infant, your airline will likely allow the baby to fly at no cost -- provided you hold him on your lap throughout the flight. Generally, American Airlines allows children younger than two years of age to fly for free with a parent or another adult over the age of 18. You'll save cash, though you'll likely be uncomfortable after a short time unless you're traveling with a partner or other adult who can take turns holding the baby. ", "Unaccompanied Minor Program. The Unaccompanied Minor Program is required for all children 5-14 years old when not traveling in the same compartment with an adult who is at least 18 years old or the child's parent/legal guardian. The program is optional for children 15-17 years old. ", 'Most airlines let under 2 fly for free (not under 3).If flying internationally,taxes or a small service fee usually 10% of adult fare will have to be paid. ANOTHER ANSWER I totally agree with answer #2. Whether you have a newbor...</code> | <code>[1, 0, 0, 0, 0, ...]</code> |
  | <code>extensor muscles of the hand that are innervated by radial nerve</code> | <code>['Extrinsic muscles of the hand innervated by the radial nerve. extensor digitorum communis (EDC), extensor digiti minimi (EDM), extensor indicis, extensor pollicis longus (EPL), extensor pollicis brevis (EPB), abductor pollicis longus (APL).', 'The radial nerve contributed 1 to 3 branches to the brachialis in 10 of 20 specimens. In all specimens, the radial nerve innervated all of the extensor fore-arm muscles. In 2 of 20 specimens, there was an extensor medius proprius (EMP) muscle.', 'The thenar muscles are three short muscles located at the base of the thumb. The muscle bellies produce a bulge, known as the thenar eminence. They are responsible for the fine movements of the thumb. The median nerve innervates all the thenar muscles.', 'A total of 27 bones constitute the basic skeleton of the wrist and hand. The hand is innervated by 3 nerves — the median, ulnar, and radial nerves — each of which has sensory and motor components. The muscles of the hand are divided into intrinsic and...</code> | <code>[1, 0, 0, 0, 0, ...]</code> |
  | <code>what does domestic limited liability company mean</code> | <code>['2. Domestic limited liability company means an entity that is an unincorporated association having one or more members and that is organized under ORS chapter 63. 4. Look beforeyou eat. Portland-area restaurant health scores. 1. Domestic limited liability company means an entity that is an unincorporated association having one or more members and that is organized under ORS chapter 63.', 'To register a Domestic Limited Liability Company in Hawaii, you must file the Articles of Organization for Limited Liability Company Form LLC-1 with the appropriate filing fee(s) . Use the links above to register and pay online or to access our fillable PDF forms which you can print and mail in with your payment. ', "I was talking to someone the other day who has a limited liability company (LLC). She is doing business in several states and she said she was told she must register as a foreign LLC in each state. She wondered why it was called a foreign LLC, since she wasn't doing business outside t...</code> | <code>[1, 0, 0, 0, 0, ...]</code> |
* Loss: [<code>PListMLELoss</code>](https://sbert.net/docs/package_reference/cross_encoder/losses.html#plistmleloss) with these parameters:
  ```json
  {
      "lambda_weight": "sentence_transformers.cross_encoder.losses.PListMLELoss.PListMLELambdaWeight",
      "activation_fct": "torch.nn.modules.linear.Identity",
      "mini_batch_size": 16,
      "respect_input_order": true
  }
  ```

### Training Hyperparameters
#### Non-Default Hyperparameters

- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `seed`: 12
- `bf16`: True
- `load_best_model_at_end`: True

#### All Hyperparameters
<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 12
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional

</details>

### Training Logs
| Epoch      | Step     | Training Loss | Validation Loss | NanoMSMARCO_R100_ndcg@10 | NanoNFCorpus_R100_ndcg@10 | NanoNQ_R100_ndcg@10  | NanoBEIR_R100_mean_ndcg@10 |
|:----------:|:--------:|:-------------:|:---------------:|:------------------------:|:-------------------------:|:--------------------:|:--------------------------:|
| -1         | -1       | -             | -               | 0.0300 (-0.5104)         | 0.2528 (-0.0723)          | 0.0168 (-0.4839)     | 0.0999 (-0.3555)           |
| 0.0002     | 1        | 2.2023        | -               | -                        | -                         | -                    | -                          |
| 0.0508     | 250      | 2.1003        | -               | -                        | -                         | -                    | -                          |
| 0.1016     | 500      | 1.9606        | 1.9318          | 0.2069 (-0.3335)         | 0.2496 (-0.0755)          | 0.2308 (-0.2699)     | 0.2291 (-0.2263)           |
| 0.1525     | 750      | 1.8932        | -               | -                        | -                         | -                    | -                          |
| 0.2033     | 1000     | 1.8711        | 1.8656          | 0.4275 (-0.1129)         | 0.2878 (-0.0372)          | 0.4897 (-0.0109)     | 0.4017 (-0.0537)           |
| 0.2541     | 1250     | 1.8597        | -               | -                        | -                         | -                    | -                          |
| 0.3049     | 1500     | 1.8486        | 1.8518          | 0.5873 (+0.0469)         | 0.3577 (+0.0327)          | 0.5874 (+0.0868)     | 0.5108 (+0.0555)           |
| 0.3558     | 1750     | 1.8415        | -               | -                        | -                         | -                    | -                          |
| 0.4066     | 2000     | 1.8338        | 1.8441          | 0.5467 (+0.0062)         | 0.3619 (+0.0368)          | 0.5936 (+0.0929)     | 0.5007 (+0.0453)           |
| 0.4574     | 2250     | 1.8189        | -               | -                        | -                         | -                    | -                          |
| 0.5082     | 2500     | 1.8338        | 1.8293          | 0.5523 (+0.0119)         | 0.3676 (+0.0426)          | 0.6452 (+0.1446)     | 0.5217 (+0.0664)           |
| 0.5591     | 2750     | 1.8109        | -               | -                        | -                         | -                    | -                          |
| 0.6099     | 3000     | 1.8291        | 1.8306          | 0.5489 (+0.0085)         | 0.3649 (+0.0398)          | 0.6360 (+0.1353)     | 0.5166 (+0.0612)           |
| 0.6607     | 3250     | 1.8124        | -               | -                        | -                         | -                    | -                          |
| **0.7115** | **3500** | **1.8205**    | **1.8301**      | **0.5526 (+0.0122)**     | **0.3642 (+0.0391)**      | **0.6488 (+0.1481)** | **0.5219 (+0.0665)**       |
| 0.7624     | 3750     | 1.8166        | -               | -                        | -                         | -                    | -                          |
| 0.8132     | 4000     | 1.8223        | 1.8205          | 0.5512 (+0.0108)         | 0.3578 (+0.0328)          | 0.6173 (+0.1167)     | 0.5088 (+0.0534)           |
| 0.8640     | 4250     | 1.8129        | -               | -                        | -                         | -                    | -                          |
| 0.9148     | 4500     | 1.8132        | 1.8214          | 0.5364 (-0.0040)         | 0.3603 (+0.0353)          | 0.6257 (+0.1251)     | 0.5075 (+0.0521)           |
| 0.9656     | 4750     | 1.8188        | -               | -                        | -                         | -                    | -                          |
| -1         | -1       | -             | -               | 0.5526 (+0.0122)         | 0.3642 (+0.0391)          | 0.6488 (+0.1481)     | 0.5219 (+0.0665)           |

* The bold row denotes the saved checkpoint.

### Environmental Impact
Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon).
- **Energy Consumed**: 0.239 kWh
- **Carbon Emitted**: 0.093 kg of CO2
- **Hours Used**: 0.972 hours
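
As a quick sanity check, the figures above imply the grid carbon intensity and average power draw of the run (all input numbers taken from this card's metadata):

```python
energy_kwh = 0.23948392867068316   # energy_consumed, from the card metadata
emissions_g = 93.08788204215189    # emissions in grams of CO2, from the card metadata
hours = 0.972                      # hours_used, from the card metadata

intensity = emissions_g / energy_kwh   # implied grid intensity in gCO2/kWh
power_w = energy_kwh / hours * 1000    # average power draw in watts

print(round(intensity, 1))  # ~388.7 gCO2/kWh
print(round(power_w))       # ~246 W
```
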

### Training Hardware
- **On Cloud**: No
- **GPU Model**: 1 x NVIDIA GeForce RTX 3090
- **CPU Model**: 13th Gen Intel(R) Core(TM) i7-13700K
- **RAM Size**: 31.78 GB

### Framework Versions
- Python: 3.11.6
- Sentence Transformers: 3.5.0.dev0
- Transformers: 4.49.0
- PyTorch: 2.6.0+cu124
- Accelerate: 1.5.1
- Datasets: 3.3.2
- Tokenizers: 0.21.0

## Citation

### BibTeX

#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

#### PListMLELoss
```bibtex
@inproceedings{lan2014position,
  title={Position-Aware ListMLE: A Sequential Learning Process for Ranking.},
  author={Lan, Yanyan and Zhu, Yadong and Guo, Jiafeng and Niu, Shuzi and Cheng, Xueqi},
  booktitle={UAI},
  volume={14},
  pages={449--458},
  year={2014}
}
```

<!--
## Glossary

*Clearly define terms in order to be accessible across audiences.*
-->

<!--
## Model Card Authors

*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->

<!--
## Model Card Contact

*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
config.json ADDED
{
  "_name_or_path": "microsoft/MiniLM-L12-H384-uncased",
  "architectures": [
    "BertForSequenceClassification"
  ],
  "attention_probs_dropout_prob": 0.1,
  "classifier_dropout": null,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 384,
  "id2label": {
    "0": "LABEL_0"
  },
  "initializer_range": 0.02,
  "intermediate_size": 1536,
  "label2id": {
    "LABEL_0": 0
  },
  "layer_norm_eps": 1e-12,
  "max_position_embeddings": 512,
  "model_type": "bert",
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "pad_token_id": 0,
  "position_embedding_type": "absolute",
  "sentence_transformers": {
    "activation_fn": "torch.nn.modules.activation.Sigmoid"
  },
  "torch_dtype": "float32",
  "transformers_version": "4.49.0",
  "type_vocab_size": 2,
  "use_cache": true,
  "vocab_size": 30522
}
model.safetensors ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:e6265160be0dedc47947c45ec9fee0d730820e13bd52f0a33e6994604bdf729b
size 133464836
special_tokens_map.json ADDED
{
  "cls_token": "[CLS]",
  "mask_token": "[MASK]",
  "pad_token": "[PAD]",
  "sep_token": "[SEP]",
  "unk_token": "[UNK]"
}
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
{
  "added_tokens_decoder": {
    "0": {
      "content": "[PAD]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "100": {
      "content": "[UNK]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "101": {
      "content": "[CLS]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "102": {
      "content": "[SEP]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "103": {
      "content": "[MASK]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "clean_up_tokenization_spaces": true,
  "cls_token": "[CLS]",
  "do_basic_tokenize": true,
  "do_lower_case": true,
  "extra_special_tokens": {},
  "mask_token": "[MASK]",
  "model_max_length": 512,
  "never_split": null,
  "pad_token": "[PAD]",
  "sep_token": "[SEP]",
  "strip_accents": null,
  "tokenize_chinese_chars": true,
  "tokenizer_class": "BertTokenizer",
  "unk_token": "[UNK]"
}
vocab.txt ADDED
The diff for this file is too large to render. See raw diff