seank333111 committed
Commit 7131b4f · verified · 1 Parent(s): 83ae2a8

Add new SparseEncoder model

README.md ADDED

---
language:
- en
license: apache-2.0
tags:
- sentence-transformers
- sparse-encoder
- sparse
- asymmetric
- inference-free
- splade
- generated_from_trainer
- dataset_size:99000
- loss:SpladeLoss
- loss:SparseMultipleNegativesRankingLoss
- loss:FlopsLoss
base_model: distilbert/distilbert-base-uncased
widget:
- text: Saudi Arabia–United Arab Emirates relations However, the UAE and Saudi Arabia
    continue to take somewhat differing stances on regional conflicts such the Yemeni
    Civil War, where the UAE opposes Al-Islah, and supports the Southern Movement,
    which has fought against Saudi-backed forces, and the Syrian Civil War, where
    the UAE has disagreed with Saudi support for Islamist movements.[4]
- text: Economy of New Zealand New Zealand's diverse market economy has a sizable
    service sector, accounting for 63% of all GDP activity in 2013.[17] Large scale
    manufacturing industries include aluminium production, food processing, metal
    fabrication, wood and paper products. Mining, manufacturing, electricity, gas,
    water, and waste services accounted for 16.5% of GDP in 2013.[17] The primary
    sector continues to dominate New Zealand's exports, despite accounting for 6.5%
    of GDP in 2013.[17]
- text: who was the first president of indian science congress meeting held in kolkata
    in 1914
- text: Get Over It (Eagles song) "Get Over It" is a song by the Eagles released as
    a single after a fourteen-year breakup. It was also the first song written by
    bandmates Don Henley and Glenn Frey when the band reunited. "Get Over It" was
    played live for the first time during their Hell Freezes Over tour in 1994. It
    returned the band to the U.S. Top 40 after a fourteen-year absence, peaking at
    No. 31 on the Billboard Hot 100 chart. It also hit No. 4 on the Billboard Mainstream
    Rock Tracks chart. The song was not played live by the Eagles after the "Hell
    Freezes Over" tour in 1994. It remains the group's last Top 40 hit in the U.S.
- text: 'Cornelius the Centurion Cornelius (Greek: Κορνήλιος) was a Roman centurion
    who is considered by Christians to be one of the first Gentiles to convert to
    the faith, as related in Acts of the Apostles.'
datasets:
- sentence-transformers/natural-questions
pipeline_tag: feature-extraction
library_name: sentence-transformers
metrics:
- dot_accuracy@1
- dot_accuracy@3
- dot_accuracy@5
- dot_accuracy@10
- dot_precision@1
- dot_precision@3
- dot_precision@5
- dot_precision@10
- dot_recall@1
- dot_recall@3
- dot_recall@5
- dot_recall@10
- dot_ndcg@10
- dot_mrr@10
- dot_map@100
- query_active_dims
- query_sparsity_ratio
- corpus_active_dims
- corpus_sparsity_ratio
model-index:
- name: Inference-free SPLADE distilbert-base-uncased trained on Natural-Questions
    tuples
  results:
  - task:
      type: sparse-information-retrieval
      name: Sparse Information Retrieval
    dataset:
      name: NanoMSMARCO
      type: NanoMSMARCO
    metrics:
    - type: dot_accuracy@1
      value: 0.32
      name: Dot Accuracy@1
    - type: dot_accuracy@3
      value: 0.52
      name: Dot Accuracy@3
    - type: dot_accuracy@5
      value: 0.6
      name: Dot Accuracy@5
    - type: dot_accuracy@10
      value: 0.8
      name: Dot Accuracy@10
    - type: dot_precision@1
      value: 0.32
      name: Dot Precision@1
    - type: dot_precision@3
      value: 0.1733333333333333
      name: Dot Precision@3
    - type: dot_precision@5
      value: 0.12000000000000002
      name: Dot Precision@5
    - type: dot_precision@10
      value: 0.08
      name: Dot Precision@10
    - type: dot_recall@1
      value: 0.32
      name: Dot Recall@1
    - type: dot_recall@3
      value: 0.52
      name: Dot Recall@3
    - type: dot_recall@5
      value: 0.6
      name: Dot Recall@5
    - type: dot_recall@10
      value: 0.8
      name: Dot Recall@10
    - type: dot_ndcg@10
      value: 0.5294275268594165
      name: Dot Ndcg@10
    - type: dot_mrr@10
      value: 0.44701587301587303
      name: Dot Mrr@10
    - type: dot_map@100
      value: 0.4547435439455525
      name: Dot Map@100
    - type: query_active_dims
      value: 6.380000114440918
      name: Query Active Dims
    - type: query_sparsity_ratio
      value: 0.9997909704437966
      name: Query Sparsity Ratio
    - type: corpus_active_dims
      value: 56.05611801147461
      name: Corpus Active Dims
    - type: corpus_sparsity_ratio
      value: 0.9981634192382061
      name: Corpus Sparsity Ratio
  - task:
      type: sparse-information-retrieval
      name: Sparse Information Retrieval
    dataset:
      name: NanoNFCorpus
      type: NanoNFCorpus
    metrics:
    - type: dot_accuracy@1
      value: 0.44
      name: Dot Accuracy@1
    - type: dot_accuracy@3
      value: 0.48
      name: Dot Accuracy@3
    - type: dot_accuracy@5
      value: 0.54
      name: Dot Accuracy@5
    - type: dot_accuracy@10
      value: 0.58
      name: Dot Accuracy@10
    - type: dot_precision@1
      value: 0.44
      name: Dot Precision@1
    - type: dot_precision@3
      value: 0.3533333333333333
      name: Dot Precision@3
    - type: dot_precision@5
      value: 0.32800000000000007
      name: Dot Precision@5
    - type: dot_precision@10
      value: 0.24600000000000002
      name: Dot Precision@10
    - type: dot_recall@1
      value: 0.04296449405849682
      name: Dot Recall@1
    - type: dot_recall@3
      value: 0.07246863989183633
      name: Dot Recall@3
    - type: dot_recall@5
      value: 0.09285358111876901
      name: Dot Recall@5
    - type: dot_recall@10
      value: 0.11634922767333658
      name: Dot Recall@10
    - type: dot_ndcg@10
      value: 0.31292844524261265
      name: Dot Ndcg@10
    - type: dot_mrr@10
      value: 0.47585714285714287
      name: Dot Mrr@10
    - type: dot_map@100
      value: 0.13754623990324893
      name: Dot Map@100
    - type: query_active_dims
      value: 4.760000228881836
      name: Query Active Dims
    - type: query_sparsity_ratio
      value: 0.999844046909479
      name: Query Sparsity Ratio
    - type: corpus_active_dims
      value: 69.88655853271484
      name: Corpus Active Dims
    - type: corpus_sparsity_ratio
      value: 0.9977102890199622
      name: Corpus Sparsity Ratio
  - task:
      type: sparse-information-retrieval
      name: Sparse Information Retrieval
    dataset:
      name: NanoNQ
      type: NanoNQ
    metrics:
    - type: dot_accuracy@1
      value: 0.38
      name: Dot Accuracy@1
    - type: dot_accuracy@3
      value: 0.62
      name: Dot Accuracy@3
    - type: dot_accuracy@5
      value: 0.68
      name: Dot Accuracy@5
    - type: dot_accuracy@10
      value: 0.74
      name: Dot Accuracy@10
    - type: dot_precision@1
      value: 0.38
      name: Dot Precision@1
    - type: dot_precision@3
      value: 0.20666666666666664
      name: Dot Precision@3
    - type: dot_precision@5
      value: 0.136
      name: Dot Precision@5
    - type: dot_precision@10
      value: 0.07600000000000001
      name: Dot Precision@10
    - type: dot_recall@1
      value: 0.37
      name: Dot Recall@1
    - type: dot_recall@3
      value: 0.58
      name: Dot Recall@3
    - type: dot_recall@5
      value: 0.64
      name: Dot Recall@5
    - type: dot_recall@10
      value: 0.71
      name: Dot Recall@10
    - type: dot_ndcg@10
      value: 0.5476944409397304
      name: Dot Ndcg@10
    - type: dot_mrr@10
      value: 0.5072222222222222
      name: Dot Mrr@10
    - type: dot_map@100
      value: 0.4973273986984246
      name: Dot Map@100
    - type: query_active_dims
      value: 9.4399995803833
      name: Query Active Dims
    - type: query_sparsity_ratio
      value: 0.9996907149079227
      name: Query Sparsity Ratio
    - type: corpus_active_dims
      value: 51.11539077758789
      name: Corpus Active Dims
    - type: corpus_sparsity_ratio
      value: 0.998325293533268
      name: Corpus Sparsity Ratio
  - task:
      type: sparse-nano-beir
      name: Sparse Nano BEIR
    dataset:
      name: NanoBEIR mean
      type: NanoBEIR_mean
    metrics:
    - type: dot_accuracy@1
      value: 0.38000000000000006
      name: Dot Accuracy@1
    - type: dot_accuracy@3
      value: 0.54
      name: Dot Accuracy@3
    - type: dot_accuracy@5
      value: 0.6066666666666668
      name: Dot Accuracy@5
    - type: dot_accuracy@10
      value: 0.7066666666666667
      name: Dot Accuracy@10
    - type: dot_precision@1
      value: 0.38000000000000006
      name: Dot Precision@1
    - type: dot_precision@3
      value: 0.24444444444444444
      name: Dot Precision@3
    - type: dot_precision@5
      value: 0.19466666666666668
      name: Dot Precision@5
    - type: dot_precision@10
      value: 0.134
      name: Dot Precision@10
    - type: dot_recall@1
      value: 0.24432149801949896
      name: Dot Recall@1
    - type: dot_recall@3
      value: 0.39082287996394544
      name: Dot Recall@3
    - type: dot_recall@5
      value: 0.4442845270395897
      name: Dot Recall@5
    - type: dot_recall@10
      value: 0.5421164092244455
      name: Dot Recall@10
    - type: dot_ndcg@10
      value: 0.46335013768058647
      name: Dot Ndcg@10
    - type: dot_mrr@10
      value: 0.4766984126984127
      name: Dot Mrr@10
    - type: dot_map@100
      value: 0.36320572751574204
      name: Dot Map@100
    - type: query_active_dims
      value: 6.859999974568685
      name: Query Active Dims
    - type: query_sparsity_ratio
      value: 0.9997752440870661
      name: Query Sparsity Ratio
    - type: corpus_active_dims
      value: 57.281252631734205
      name: Corpus Active Dims
    - type: corpus_sparsity_ratio
      value: 0.9981232798430071
      name: Corpus Sparsity Ratio
---

# Inference-free SPLADE distilbert-base-uncased trained on Natural-Questions tuples

This is an [Asymmetric Inference-free SPLADE Sparse Encoder](https://www.sbert.net/docs/sparse_encoder/usage/usage.html) model finetuned from [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on the [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions) dataset using the [sentence-transformers](https://www.SBERT.net) library. It maps sentences & paragraphs to a 30522-dimensional sparse vector space and can be used for semantic search and sparse retrieval.

## Model Details

### Model Description
- **Model Type:** Asymmetric Inference-free SPLADE Sparse Encoder
- **Base model:** [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) <!-- at revision 12040accade4e8a0f71eabdb258fecc2e7e948be -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 30522 dimensions
- **Similarity Function:** Dot Product
- **Training Dataset:**
    - [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions)
- **Language:** en
- **License:** apache-2.0

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Documentation:** [Sparse Encoder Documentation](https://www.sbert.net/docs/sparse_encoder/usage/usage.html)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sparse Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=sparse-encoder)

### Full Model Architecture

```
SparseEncoder(
  (0): Router(
    (query_0_SparseStaticEmbedding): SparseStaticEmbedding({'frozen': False}, dim=30522, tokenizer=DistilBertTokenizerFast)
    (document_0_MLMTransformer): MLMTransformer({'max_seq_length': 512, 'do_lower_case': False, 'architecture': 'DistilBertForMaskedLM'})
    (document_1_SpladePooling): SpladePooling({'pooling_strategy': 'max', 'activation_function': 'relu', 'word_embedding_dimension': 30522})
  )
)
```
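
The asymmetry above is the key design point: the `query` route is a `SparseStaticEmbedding` lookup (no transformer forward pass at query time), while the `document` route runs the DistilBERT MLM head followed by SPLADE pooling. As a rough, self-contained sketch of the idea (not the library's actual implementation; all names here are illustrative):

```python
import torch

VOCAB_SIZE = 30522

def static_query_embedding(token_ids: torch.Tensor, token_weights: torch.Tensor) -> torch.Tensor:
    """Query route: one learned scalar weight per vocabulary token, no transformer inference."""
    emb = torch.zeros(VOCAB_SIZE)
    emb[token_ids] = token_weights[token_ids]  # scatter per-token weights into a sparse vector
    return emb

def splade_document_embedding(mlm_logits: torch.Tensor) -> torch.Tensor:
    """Document route: SPLADE pooling, i.e. max over positions of log(1 + ReLU(logits)).

    mlm_logits: (seq_len, VOCAB_SIZE) output of the masked-language-model head.
    """
    return torch.log1p(torch.relu(mlm_logits)).max(dim=0).values
```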

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.
```python
from sentence_transformers import SparseEncoder

# Download from the 🤗 Hub
model = SparseEncoder("seank333111/inference-free-splade-distilbert-base-uncased-nq")
# Run inference
queries = [
    "who is cornelius in the book of acts",
]
documents = [
    'Cornelius the Centurion Cornelius (Greek: Κορνήλιος) was a Roman centurion who is considered by Christians to be one of the first Gentiles to convert to the faith, as related in Acts of the Apostles.',
    "Joe Ranft Ranft reunited with Lasseter when he was hired by Pixar in 1991 as their head of story.[1] There he worked on all of their films produced up to 2006; this included Toy Story (for which he received an Academy Award nomination) and A Bug's Life, as the co-story writer and others as story supervisor. His final film was Cars. He also voiced characters in many of the films, including Heimlich the caterpillar in A Bug's Life, Wheezy the penguin in Toy Story 2, and Jacques the shrimp in Finding Nemo.[1]",
    'Wonderful Tonight "Wonderful Tonight" is a ballad written by Eric Clapton. It was included on Clapton\'s 1977 album Slowhand. Clapton wrote the song about Pattie Boyd.[1] The female vocal harmonies on the song are provided by Marcella Detroit (then Marcy Levy) and Yvonne Elliman.',
]
query_embeddings = model.encode_query(queries)
document_embeddings = model.encode_document(documents)
print(query_embeddings.shape, document_embeddings.shape)
# [1, 30522] [3, 30522]

# Get the similarity scores for the embeddings
similarities = model.similarity(query_embeddings, document_embeddings)
print(similarities)
# tensor([[5.9751, 0.2390, 0.0000]])
```
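
Because the embeddings are sparse over the vocabulary, you can also inspect which tokens carry the weight. The snippet below assumes the `decode` helper on `SparseEncoder` (available in recent sentence-transformers releases); the exact return format may vary by version, and the printed values are illustrative:

```python
# Map the non-zero dimensions of a sparse embedding back to vocabulary tokens
decoded = model.decode(query_embeddings[0], top_k=10)
print(decoded)  # e.g. [('cornelius', 2.31), ('acts', 1.12), ...]
```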

<!--
### Direct Usage (Transformers)

<details><summary>Click to see the direct usage in Transformers</summary>

</details>
-->

<!--
### Downstream Usage (Sentence Transformers)

You can finetune this model on your own dataset.

<details><summary>Click to expand</summary>

</details>
-->

<!--
### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->

## Evaluation

### Metrics

#### Sparse Information Retrieval

* Datasets: `NanoMSMARCO`, `NanoNFCorpus` and `NanoNQ`
* Evaluated with [<code>SparseInformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseInformationRetrievalEvaluator)

| Metric                | NanoMSMARCO | NanoNFCorpus | NanoNQ     |
|:----------------------|:------------|:-------------|:-----------|
| dot_accuracy@1        | 0.32        | 0.44         | 0.38       |
| dot_accuracy@3        | 0.52        | 0.48         | 0.62       |
| dot_accuracy@5        | 0.6         | 0.54         | 0.68       |
| dot_accuracy@10       | 0.8         | 0.58         | 0.74       |
| dot_precision@1       | 0.32        | 0.44         | 0.38       |
| dot_precision@3       | 0.1733      | 0.3533       | 0.2067     |
| dot_precision@5       | 0.12        | 0.328        | 0.136      |
| dot_precision@10      | 0.08        | 0.246        | 0.076      |
| dot_recall@1          | 0.32        | 0.043        | 0.37       |
| dot_recall@3          | 0.52        | 0.0725       | 0.58       |
| dot_recall@5          | 0.6         | 0.0929       | 0.64       |
| dot_recall@10         | 0.8         | 0.1163       | 0.71       |
| **dot_ndcg@10**       | **0.5294**  | **0.3129**   | **0.5477** |
| dot_mrr@10            | 0.447       | 0.4759       | 0.5072     |
| dot_map@100           | 0.4547      | 0.1375       | 0.4973     |
| query_active_dims     | 6.38        | 4.76         | 9.44       |
| query_sparsity_ratio  | 0.9998      | 0.9998       | 0.9997     |
| corpus_active_dims    | 56.0561     | 69.8866      | 51.1154    |
| corpus_sparsity_ratio | 0.9982      | 0.9977       | 0.9983     |

#### Sparse Nano BEIR

* Dataset: `NanoBEIR_mean`
* Evaluated with [<code>SparseNanoBEIREvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseNanoBEIREvaluator) with these parameters:
  ```json
  {
      "dataset_names": [
          "msmarco",
          "nfcorpus",
          "nq"
      ]
  }
  ```
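
For reference, a minimal sketch of re-running this evaluation with the evaluator linked above (assuming the model is already loaded as `model`):

```python
from sentence_transformers.sparse_encoder.evaluation import SparseNanoBEIREvaluator

# Same dataset subset as in the parameters above
evaluator = SparseNanoBEIREvaluator(dataset_names=["msmarco", "nfcorpus", "nq"])
results = evaluator(model)  # a dict containing the metrics tabulated below
```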

| Metric                | Value      |
|:----------------------|:-----------|
| dot_accuracy@1        | 0.38       |
| dot_accuracy@3        | 0.54       |
| dot_accuracy@5        | 0.6067     |
| dot_accuracy@10       | 0.7067     |
| dot_precision@1       | 0.38       |
| dot_precision@3       | 0.2444     |
| dot_precision@5       | 0.1947     |
| dot_precision@10      | 0.134      |
| dot_recall@1          | 0.2443     |
| dot_recall@3          | 0.3908     |
| dot_recall@5          | 0.4443     |
| dot_recall@10         | 0.5421     |
| **dot_ndcg@10**       | **0.4634** |
| dot_mrr@10            | 0.4767     |
| dot_map@100           | 0.3632     |
| query_active_dims     | 6.86       |
| query_sparsity_ratio  | 0.9998     |
| corpus_active_dims    | 57.2813    |
| corpus_sparsity_ratio | 0.9981     |
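
The sparsity ratios follow directly from the active dimension counts over the 30522-token vocabulary:

```python
# Sanity check: sparsity_ratio = 1 - active_dims / vocab_size
VOCAB_SIZE = 30522
print(1 - 6.86 / VOCAB_SIZE)     # ≈ 0.99978 -> query_sparsity_ratio above
print(1 - 57.2813 / VOCAB_SIZE)  # ≈ 0.99812 -> corpus_sparsity_ratio above
```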

<!--
## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->

<!--
### Recommendations

*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->

## Training Details

### Training Dataset

#### natural-questions

* Dataset: [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions) at [f9e894e](https://huggingface.co/datasets/sentence-transformers/natural-questions/tree/f9e894e1081e206e577b4eaa9ee6de2b06ae6f17)
* Size: 99,000 training samples
* Columns: <code>query</code> and <code>answer</code>
* Approximate statistics based on the first 1000 samples:
  |         | query                                                                              | answer                                                                              |
  |:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
  | type    | string                                                                              | string                                                                                |
  | details | <ul><li>min: 10 tokens</li><li>mean: 11.71 tokens</li><li>max: 26 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 131.81 tokens</li><li>max: 450 tokens</li></ul> |
* Samples:
  | query                                                          | answer                                                                                                                                                                                                                                                                                                                                                                                                                                     |
  |:---------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
  | <code>who played the father in papa don't preach</code>       | <code>Alex McArthur Alex McArthur (born March 6, 1957) is an American actor.</code>                                                                                                                                                                                                                                                                                                                                                        |
  | <code>where was the location of the battle of hastings</code> | <code>Battle of Hastings The Battle of Hastings[a] was fought on 14 October 1066 between the Norman-French army of William, the Duke of Normandy, and an English army under the Anglo-Saxon King Harold Godwinson, beginning the Norman conquest of England. It took place approximately 7 miles (11 kilometres) northwest of Hastings, close to the present-day town of Battle, East Sussex, and was a decisive Norman victory.</code> |
  | <code>how many puppies can a dog give birth to</code>         | <code>Canine reproduction The largest litter size to date was set by a Neapolitan Mastiff in Manea, Cambridgeshire, UK on November 29, 2004; the litter was 24 puppies.[22]</code>                                                                                                                                                                                                                                                         |
* Loss: [<code>SpladeLoss</code>](https://sbert.net/docs/package_reference/sparse_encoder/losses.html#spladeloss) with these parameters:
  ```json
  {
      "loss": "SparseMultipleNegativesRankingLoss(scale=1.0, similarity_fct='dot_score')",
      "document_regularizer_weight": 0.003,
      "query_regularizer_weight": 0
  }
  ```
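
A minimal sketch of reconstructing this loss configuration in code (the trainer wiring, datasets, and arguments such as `router_mapping` are omitted here; see the hyperparameters below). Loading the published model is just a stand-in to illustrate the wiring:

```python
from sentence_transformers import SparseEncoder
from sentence_transformers.sparse_encoder.losses import (
    SparseMultipleNegativesRankingLoss,
    SpladeLoss,
)

model = SparseEncoder("seank333111/inference-free-splade-distilbert-base-uncased-nq")
loss = SpladeLoss(
    model=model,
    loss=SparseMultipleNegativesRankingLoss(model=model, scale=1.0),  # dot-product scores
    document_regularizer_weight=0.003,  # FLOPS-style sparsity penalty on document vectors
    query_regularizer_weight=0,         # queries are a static lookup, so no query penalty
)
```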

### Evaluation Dataset

#### natural-questions

* Dataset: [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions) at [f9e894e](https://huggingface.co/datasets/sentence-transformers/natural-questions/tree/f9e894e1081e206e577b4eaa9ee6de2b06ae6f17)
* Size: 1,000 evaluation samples
* Columns: <code>query</code> and <code>answer</code>
* Approximate statistics based on the first 1000 samples:
  |         | query                                                                              | answer                                                                               |
  |:--------|:-----------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|
  | type    | string                                                                              | string                                                                                 |
  | details | <ul><li>min: 10 tokens</li><li>mean: 11.69 tokens</li><li>max: 23 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 134.01 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
  | query                                                   | answer                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                         |
  |:--------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
  | <code>where is the tiber river located in italy</code> | <code>Tiber The Tiber (/ˈtaɪbər/, Latin: Tiberis,[1] Italian: Tevere [ˈteːvere])[2] is the third-longest river in Italy, rising in the Apennine Mountains in Emilia-Romagna and flowing 406 kilometres (252 mi) through Tuscany, Umbria and Lazio, where it is joined by the river Aniene, to the Tyrrhenian Sea, between Ostia and Fiumicino.[3] It drains a basin estimated at 17,375 square kilometres (6,709 sq mi). The river has achieved lasting fame as the main watercourse of the city of Rome, founded on its eastern banks.</code> |
  | <code>what kind of car does jay gatsby drive</code>    | <code>Jay Gatsby At the Buchanan home, Jordan Baker, Nick, Jay, and the Buchanans decide to visit New York City. Tom borrows Gatsby's yellow Rolls Royce to drive up to the city. On the way to New York City, Tom makes a detour at a gas station in "the Valley of Ashes", a run-down part of Long Island. The owner, George Wilson, shares his concern that his wife, Myrtle, may be having an affair. This unnerves Tom, who has been having an affair with Myrtle, and he leaves in a hurry.</code>                                     |
  | <code>who sings if i can dream about you</code>        | <code>I Can Dream About You "I Can Dream About You" is a song performed by American singer Dan Hartman on the soundtrack album of the film Streets of Fire. Released in 1984 as a single from the soundtrack, and included on Hartman's album I Can Dream About You, it reached number 6 on the Billboard Hot 100.[1]</code>                                                                                                                                                                                                                    |
* Loss: [<code>SpladeLoss</code>](https://sbert.net/docs/package_reference/sparse_encoder/losses.html#spladeloss) with these parameters:
  ```json
  {
      "loss": "SparseMultipleNegativesRankingLoss(scale=1.0, similarity_fct='dot_score')",
      "document_regularizer_weight": 0.003,
      "query_regularizer_weight": 0
  }
  ```

### Training Hyperparameters
#### Non-Default Hyperparameters

- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `fp16`: True
- `batch_sampler`: no_duplicates
- `router_mapping`: {'query': 'query', 'answer': 'document'}
- `learning_rate_mapping`: {'SparseStaticEmbedding\\.weight': 0.001}

#### All Hyperparameters
<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`: 
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {'query': 'query', 'answer': 'document'}
- `learning_rate_mapping`: {'SparseStaticEmbedding\\.weight': 0.001}

</details>

### Training Logs
| Epoch  | Step | Training Loss | Validation Loss | NanoMSMARCO_dot_ndcg@10 | NanoNFCorpus_dot_ndcg@10 | NanoNQ_dot_ndcg@10 | NanoBEIR_mean_dot_ndcg@10 |
|:------:|:----:|:-------------:|:---------------:|:-----------------------:|:------------------------:|:------------------:|:-------------------------:|
| 0.0323 | 200  | 0.2418        | -               | -                       | -                        | -                  | -                         |
| 0.0646 | 400  | 0.0857        | -               | -                       | -                        | -                  | -                         |
| 0.0970 | 600  | 0.072         | -               | -                       | -                        | -                  | -                         |
| 0.1293 | 800  | 0.062         | -               | -                       | -                        | -                  | -                         |
| 0.1616 | 1000 | 0.0624        | 0.0867          | 0.5213                  | 0.3326                   | 0.5296             | 0.4612                    |
| 0.1939 | 1200 | 0.0684        | -               | -                       | -                        | -                  | -                         |
| 0.2262 | 1400 | 0.0776        | -               | -                       | -                        | -                  | -                         |
| 0.2586 | 1600 | 0.0824        | -               | -                       | -                        | -                  | -                         |
| 0.2909 | 1800 | 0.0826        | -               | -                       | -                        | -                  | -                         |
| 0.3232 | 2000 | 0.082         | 0.1028          | 0.5108                  | 0.3230                   | 0.5169             | 0.4502                    |
| 0.3555 | 2200 | 0.0869        | -               | -                       | -                        | -                  | -                         |
| 0.3878 | 2400 | 0.0866        | -               | -                       | -                        | -                  | -                         |
| 0.4202 | 2600 | 0.0848        | -               | -                       | -                        | -                  | -                         |
| 0.4525 | 2800 | 0.0816        | -               | -                       | -                        | -                  | -                         |
| 0.4848 | 3000 | 0.0769        | 0.0914          | 0.5667                  | 0.3149                   | 0.5786             | 0.4867                    |
| 0.5171 | 3200 | 0.0745        | -               | -                       | -                        | -                  | -                         |
| 0.5495 | 3400 | 0.0831        | -               | -                       | -                        | -                  | -                         |
| 0.5818 | 3600 | 0.0764        | -               | -                       | -                        | -                  | -                         |
| 0.6141 | 3800 | 0.0806        | -               | -                       | -                        | -                  | -                         |
| 0.6464 | 4000 | 0.0742        | 0.0885          | 0.5512                  | 0.3221                   | 0.5262             | 0.4665                    |
| 0.6787 | 4200 | 0.0739        | -               | -                       | -                        | -                  | -                         |
| 0.7111 | 4400 | 0.0674        | -               | -                       | -                        | -                  | -                         |
| 0.7434 | 4600 | 0.0675        | -               | -                       | -                        | -                  | -                         |
| 0.7757 | 4800 | 0.0741        | -               | -                       | -                        | -                  | -                         |
| 0.8080 | 5000 | 0.0686        | 0.0827          | 0.5514                  | 0.3146                   | 0.5632             | 0.4764                    |
| 0.8403 | 5200 | 0.0745        | -               | -                       | -                        | -                  | -                         |
| 0.8727 | 5400 | 0.0687        | -               | -                       | -                        | -                  | -                         |
| 0.9050 | 5600 | 0.0637        | -               | -                       | -                        | -                  | -                         |
| 0.9373 | 5800 | 0.0637        | -               | -                       | -                        | -                  | -                         |
| 0.9696 | 6000 | 0.0648        | 0.0785          | 0.5292                  | 0.3117                   | 0.5480             | 0.4630                    |
| -1     | -1   | -             | -               | 0.5294                  | 0.3129                   | 0.5477             | 0.4634                    |


### Framework Versions
- Python: 3.11.13
- Sentence Transformers: 5.0.0
- Transformers: 4.53.0
- PyTorch: 2.6.0+cu124
- Accelerate: 1.8.1
- Datasets: 3.6.0
- Tokenizers: 0.21.2

## Citation

### BibTeX

#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

#### SpladeLoss
```bibtex
@misc{formal2022distillationhardnegativesampling,
    title={From Distillation to Hard Negative Sampling: Making Sparse Neural IR Models More Effective},
    author={Thibault Formal and Carlos Lassance and Benjamin Piwowarski and Stéphane Clinchant},
    year={2022},
    eprint={2205.04733},
    archivePrefix={arXiv},
    primaryClass={cs.IR},
    url={https://arxiv.org/abs/2205.04733},
}
```

#### SparseMultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```

#### FlopsLoss
```bibtex
@article{paria2020minimizing,
    title={Minimizing flops to learn efficient sparse representations},
    author={Paria, Biswajit and Yeh, Chih-Kuan and Yen, Ian EH and Xu, Ning and Ravikumar, Pradeep and P{\'o}czos, Barnab{\'a}s},
    journal={arXiv preprint arXiv:2004.05665},
    year={2020}
}
```

<!--
## Glossary

*Clearly define terms in order to be accessible across audiences.*
-->

<!--
## Model Card Authors

*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->

<!--
## Model Card Contact

*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->

config_sentence_transformers.json ADDED
{
  "model_type": "SparseEncoder",
  "__version__": {
    "sentence_transformers": "5.0.0",
    "transformers": "4.53.0",
    "pytorch": "2.6.0+cu124"
  },
  "prompts": {
    "query": "",
    "document": ""
  },
  "default_prompt_name": null,
  "similarity_fn_name": "dot"
}

document_0_MLMTransformer/config.json ADDED
{
  "activation": "gelu",
  "architectures": [
    "DistilBertForMaskedLM"
  ],
  "attention_dropout": 0.1,
  "dim": 768,
  "dropout": 0.1,
  "hidden_dim": 3072,
  "initializer_range": 0.02,
  "max_position_embeddings": 512,
  "model_type": "distilbert",
  "n_heads": 12,
  "n_layers": 6,
  "pad_token_id": 0,
  "qa_dropout": 0.1,
  "seq_classif_dropout": 0.2,
  "sinusoidal_pos_embds": false,
  "tie_weights_": true,
  "torch_dtype": "float32",
  "transformers_version": "4.53.0",
  "vocab_size": 30522
}

document_0_MLMTransformer/model.safetensors ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:90eff0fe4420de6fabb137d5883a4d599d1734b5dc99241495dcfc3ed7ae48c7
size 267954768

document_0_MLMTransformer/sentence_bert_config.json ADDED
{
  "max_seq_length": 512,
  "do_lower_case": false
}

document_0_MLMTransformer/special_tokens_map.json ADDED
{
  "cls_token": "[CLS]",
  "mask_token": "[MASK]",
  "pad_token": "[PAD]",
  "sep_token": "[SEP]",
  "unk_token": "[UNK]"
}

document_0_MLMTransformer/tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
document_0_MLMTransformer/tokenizer_config.json ADDED
{
  "added_tokens_decoder": {
    "0": {
      "content": "[PAD]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "100": {
      "content": "[UNK]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "101": {
      "content": "[CLS]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "102": {
      "content": "[SEP]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "103": {
      "content": "[MASK]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "clean_up_tokenization_spaces": false,
  "cls_token": "[CLS]",
  "do_lower_case": true,
  "extra_special_tokens": {},
  "mask_token": "[MASK]",
  "model_max_length": 512,
  "pad_token": "[PAD]",
  "sep_token": "[SEP]",
  "strip_accents": null,
  "tokenize_chinese_chars": true,
  "tokenizer_class": "DistilBertTokenizer",
  "unk_token": "[UNK]"
}

document_0_MLMTransformer/vocab.txt ADDED
The diff for this file is too large to render. See raw diff
 
document_1_SpladePooling/config.json ADDED
{
  "pooling_strategy": "max",
  "activation_function": "relu",
  "word_embedding_dimension": 30522
}

modules.json ADDED
[
  {
    "idx": 0,
    "name": "0",
    "path": "",
    "type": "sentence_transformers.models.Router"
  }
]

query_0_SparseStaticEmbedding/config.json ADDED
{
  "frozen": false
}

query_0_SparseStaticEmbedding/model.safetensors ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:a1713e419e049ba2b0cd970ce7fc28d487200b4ea6957c26789b395d68a5fbbc
size 122168

query_0_SparseStaticEmbedding/special_tokens_map.json ADDED
{
  "cls_token": "[CLS]",
  "mask_token": "[MASK]",
  "pad_token": "[PAD]",
  "sep_token": "[SEP]",
  "unk_token": "[UNK]"
}

query_0_SparseStaticEmbedding/tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
query_0_SparseStaticEmbedding/tokenizer_config.json ADDED
{
  "added_tokens_decoder": {
    "0": {
      "content": "[PAD]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "100": {
      "content": "[UNK]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "101": {
      "content": "[CLS]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "102": {
      "content": "[SEP]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "103": {
      "content": "[MASK]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "clean_up_tokenization_spaces": false,
  "cls_token": "[CLS]",
  "do_lower_case": true,
  "extra_special_tokens": {},
  "mask_token": "[MASK]",
  "model_max_length": 512,
  "pad_token": "[PAD]",
  "sep_token": "[SEP]",
  "strip_accents": null,
  "tokenize_chinese_chars": true,
  "tokenizer_class": "DistilBertTokenizer",
  "unk_token": "[UNK]"
}

query_0_SparseStaticEmbedding/vocab.txt ADDED
The diff for this file is too large to render. See raw diff
 
router_config.json ADDED
{
  "types": {
    "query_0_SparseStaticEmbedding": "sentence_transformers.sparse_encoder.models.SparseStaticEmbedding.SparseStaticEmbedding",
    "document_0_MLMTransformer": "sentence_transformers.sparse_encoder.models.MLMTransformer.MLMTransformer",
    "document_1_SpladePooling": "sentence_transformers.sparse_encoder.models.SpladePooling.SpladePooling"
  },
  "structure": {
    "query": [
      "query_0_SparseStaticEmbedding"
    ],
    "document": [
      "document_0_MLMTransformer",
      "document_1_SpladePooling"
    ]
  },
  "parameters": {
    "default_route": "document",
    "allow_empty_key": true
  }
}