---
language:
- en
license: apache-2.0
tags:
- sentence-transformers
- sparse-encoder
- sparse
- asymmetric
- inference-free
- splade
- generated_from_trainer
- dataset_size:99000
- loss:SpladeLoss
- loss:SparseMultipleNegativesRankingLoss
- loss:FlopsLoss
base_model: distilbert/distilbert-base-uncased
widget:
- text: Saudi Arabia–United Arab Emirates relations However, the UAE and Saudi Arabia
    continue to take somewhat differing stances on regional conflicts such the Yemeni
    Civil War, where the UAE opposes Al-Islah, and supports the Southern Movement,
    which has fought against Saudi-backed forces, and the Syrian Civil War, where
    the UAE has disagreed with Saudi support for Islamist movements.[4]
- text: Economy of New Zealand New Zealand's diverse market economy has a sizable
    service sector, accounting for 63% of all GDP activity in 2013.[17] Large scale
    manufacturing industries include aluminium production, food processing, metal
    fabrication, wood and paper products. Mining, manufacturing, electricity, gas,
    water, and waste services accounted for 16.5% of GDP in 2013.[17] The primary
    sector continues to dominate New Zealand's exports, despite accounting for 6.5%
    of GDP in 2013.[17]
- text: who was the first president of indian science congress meeting held in kolkata
    in 1914
- text: Get Over It (Eagles song) "Get Over It" is a song by the Eagles released as
    a single after a fourteen-year breakup. It was also the first song written by
    bandmates Don Henley and Glenn Frey when the band reunited. "Get Over It" was
    played live for the first time during their Hell Freezes Over tour in 1994. It
    returned the band to the U.S. Top 40 after a fourteen-year absence, peaking at
    No. 31 on the Billboard Hot 100 chart. It also hit No. 4 on the Billboard Mainstream
    Rock Tracks chart. The song was not played live by the Eagles after the "Hell
    Freezes Over" tour in 1994. It remains the group's last Top 40 hit in the U.S.
- text: 'Cornelius the Centurion Cornelius (Greek: Κορνήλιος) was a Roman centurion
    who is considered by Christians to be one of the first Gentiles to convert to
    the faith, as related in Acts of the Apostles.'
datasets:
- sentence-transformers/natural-questions
pipeline_tag: feature-extraction
library_name: sentence-transformers
metrics:
- dot_accuracy@1
- dot_accuracy@3
- dot_accuracy@5
- dot_accuracy@10
- dot_precision@1
- dot_precision@3
- dot_precision@5
- dot_precision@10
- dot_recall@1
- dot_recall@3
- dot_recall@5
- dot_recall@10
- dot_ndcg@10
- dot_mrr@10
- dot_map@100
- query_active_dims
- query_sparsity_ratio
- corpus_active_dims
- corpus_sparsity_ratio
model-index:
- name: Inference-free SPLADE distilbert-base-uncased trained on Natural-Questions
    tuples
  results:
  - task:
      type: sparse-information-retrieval
      name: Sparse Information Retrieval
    dataset:
      name: NanoMSMARCO
      type: NanoMSMARCO
    metrics:
    - type: dot_accuracy@1
      value: 0.3
      name: Dot Accuracy@1
    - type: dot_accuracy@3
      value: 0.56
      name: Dot Accuracy@3
    - type: dot_accuracy@5
      value: 0.62
      name: Dot Accuracy@5
    - type: dot_accuracy@10
      value: 0.78
      name: Dot Accuracy@10
    - type: dot_precision@1
      value: 0.3
      name: Dot Precision@1
    - type: dot_precision@3
      value: 0.18666666666666668
      name: Dot Precision@3
    - type: dot_precision@5
      value: 0.124
      name: Dot Precision@5
    - type: dot_precision@10
      value: 0.078
      name: Dot Precision@10
    - type: dot_recall@1
      value: 0.3
      name: Dot Recall@1
    - type: dot_recall@3
      value: 0.56
      name: Dot Recall@3
    - type: dot_recall@5
      value: 0.62
      name: Dot Recall@5
    - type: dot_recall@10
      value: 0.78
      name: Dot Recall@10
    - type: dot_ndcg@10
      value: 0.5334479218312598
      name: Dot Ndcg@10
    - type: dot_mrr@10
      value: 0.45579365079365075
      name: Dot Mrr@10
    - type: dot_map@100
      value: 0.46561802519420487
      name: Dot Map@100
    - type: query_active_dims
      value: 6.380000114440918
      name: Query Active Dims
    - type: query_sparsity_ratio
      value: 0.9997909704437966
      name: Query Sparsity Ratio
    - type: corpus_active_dims
      value: 61.24806594848633
      name: Corpus Active Dims
    - type: corpus_sparsity_ratio
      value: 0.997993314135755
      name: Corpus Sparsity Ratio
  - task:
      type: sparse-information-retrieval
      name: Sparse Information Retrieval
    dataset:
      name: NanoNFCorpus
      type: NanoNFCorpus
    metrics:
    - type: dot_accuracy@1
      value: 0.4
      name: Dot Accuracy@1
    - type: dot_accuracy@3
      value: 0.48
      name: Dot Accuracy@3
    - type: dot_accuracy@5
      value: 0.52
      name: Dot Accuracy@5
    - type: dot_accuracy@10
      value: 0.6
      name: Dot Accuracy@10
    - type: dot_precision@1
      value: 0.4
      name: Dot Precision@1
    - type: dot_precision@3
      value: 0.3466666666666666
      name: Dot Precision@3
    - type: dot_precision@5
      value: 0.3
      name: Dot Precision@5
    - type: dot_precision@10
      value: 0.23800000000000002
      name: Dot Precision@10
    - type: dot_recall@1
      value: 0.040488303582306345
      name: Dot Recall@1
    - type: dot_recall@3
      value: 0.07189040859931932
      name: Dot Recall@3
    - type: dot_recall@5
      value: 0.08701508628551448
      name: Dot Recall@5
    - type: dot_recall@10
      value: 0.11292062654625955
      name: Dot Recall@10
    - type: dot_ndcg@10
      value: 0.3021858396265333
      name: Dot Ndcg@10
    - type: dot_mrr@10
      value: 0.45805555555555555
      name: Dot Mrr@10
    - type: dot_map@100
      value: 0.13275153404198187
      name: Dot Map@100
    - type: query_active_dims
      value: 4.760000228881836
      name: Query Active Dims
    - type: query_sparsity_ratio
      value: 0.999844046909479
      name: Query Sparsity Ratio
    - type: corpus_active_dims
      value: 77.5289535522461
      name: Corpus Active Dims
    - type: corpus_sparsity_ratio
      value: 0.997459899300431
      name: Corpus Sparsity Ratio
  - task:
      type: sparse-information-retrieval
      name: Sparse Information Retrieval
    dataset:
      name: NanoNQ
      type: NanoNQ
    metrics:
    - type: dot_accuracy@1
      value: 0.34
      name: Dot Accuracy@1
    - type: dot_accuracy@3
      value: 0.54
      name: Dot Accuracy@3
    - type: dot_accuracy@5
      value: 0.64
      name: Dot Accuracy@5
    - type: dot_accuracy@10
      value: 0.76
      name: Dot Accuracy@10
    - type: dot_precision@1
      value: 0.34
      name: Dot Precision@1
    - type: dot_precision@3
      value: 0.18
      name: Dot Precision@3
    - type: dot_precision@5
      value: 0.128
      name: Dot Precision@5
    - type: dot_precision@10
      value: 0.078
      name: Dot Precision@10
    - type: dot_recall@1
      value: 0.34
      name: Dot Recall@1
    - type: dot_recall@3
      value: 0.51
      name: Dot Recall@3
    - type: dot_recall@5
      value: 0.59
      name: Dot Recall@5
    - type: dot_recall@10
      value: 0.71
      name: Dot Recall@10
    - type: dot_ndcg@10
      value: 0.526165293329912
      name: Dot Ndcg@10
    - type: dot_mrr@10
      value: 0.47612698412698407
      name: Dot Mrr@10
    - type: dot_map@100
      value: 0.47036489156683237
      name: Dot Map@100
    - type: query_active_dims
      value: 9.4399995803833
      name: Query Active Dims
    - type: query_sparsity_ratio
      value: 0.9996907149079227
      name: Query Sparsity Ratio
    - type: corpus_active_dims
      value: 54.611717224121094
      name: Corpus Active Dims
    - type: corpus_sparsity_ratio
      value: 0.9982107425062539
      name: Corpus Sparsity Ratio
  - task:
      type: sparse-nano-beir
      name: Sparse Nano BEIR
    dataset:
      name: NanoBEIR mean
      type: NanoBEIR_mean
    metrics:
    - type: dot_accuracy@1
      value: 0.3466666666666667
      name: Dot Accuracy@1
    - type: dot_accuracy@3
      value: 0.5266666666666667
      name: Dot Accuracy@3
    - type: dot_accuracy@5
      value: 0.5933333333333334
      name: Dot Accuracy@5
    - type: dot_accuracy@10
      value: 0.7133333333333333
      name: Dot Accuracy@10
    - type: dot_precision@1
      value: 0.3466666666666667
      name: Dot Precision@1
    - type: dot_precision@3
      value: 0.23777777777777778
      name: Dot Precision@3
    - type: dot_precision@5
      value: 0.18400000000000002
      name: Dot Precision@5
    - type: dot_precision@10
      value: 0.13133333333333333
      name: Dot Precision@10
    - type: dot_recall@1
      value: 0.22682943452743545
      name: Dot Recall@1
    - type: dot_recall@3
      value: 0.3806301361997731
      name: Dot Recall@3
    - type: dot_recall@5
      value: 0.43233836209517146
      name: Dot Recall@5
    - type: dot_recall@10
      value: 0.5343068755154198
      name: Dot Recall@10
    - type: dot_ndcg@10
      value: 0.4539330182625683
      name: Dot Ndcg@10
    - type: dot_mrr@10
      value: 0.46332539682539675
      name: Dot Mrr@10
    - type: dot_map@100
      value: 0.3562448169343397
      name: Dot Map@100
    - type: query_active_dims
      value: 6.859999974568685
      name: Query Active Dims
    - type: query_sparsity_ratio
      value: 0.9997752440870661
      name: Query Sparsity Ratio
    - type: corpus_active_dims
      value: 62.37333993104512
      name: Corpus Active Dims
    - type: corpus_sparsity_ratio
      value: 0.9979564464998675
      name: Corpus Sparsity Ratio
---

# Inference-free SPLADE distilbert-base-uncased trained on Natural-Questions tuples

This is an [Asymmetric Inference-free SPLADE Sparse Encoder](https://www.sbert.net/docs/sparse_encoder/usage/usage.html) model finetuned from [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on the [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions) dataset using the [sentence-transformers](https://www.SBERT.net) library. It maps sentences & paragraphs to a 30522-dimensional sparse vector space and can be used for semantic search and sparse retrieval.

## Model Details

### Model Description
- **Model Type:** Asymmetric Inference-free SPLADE Sparse Encoder
- **Base model:** [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) <!-- at revision 12040accade4e8a0f71eabdb258fecc2e7e948be -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 30522 dimensions
- **Similarity Function:** Dot Product
- **Training Dataset:**
    - [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions)
- **Language:** en
- **License:** apache-2.0

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Documentation:** [Sparse Encoder Documentation](https://www.sbert.net/docs/sparse_encoder/usage/usage.html)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sparse Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=sparse-encoder)

### Full Model Architecture

```
SparseEncoder(
  (0): Router(
    (sub_modules): ModuleDict(
      (query): Sequential(
        (0): SparseStaticEmbedding({'frozen': False}, dim=30522, tokenizer=DistilBertTokenizerFast)
      )
      (document): Sequential(
        (0): MLMTransformer({'max_seq_length': 512, 'do_lower_case': False, 'architecture': 'DistilBertForMaskedLM'})
        (1): SpladePooling({'pooling_strategy': 'max', 'activation_function': 'relu', 'word_embedding_dimension': 30522})
      )
    )
  )
)
```
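
The asymmetry above is what makes the model "inference-free": the document side runs a full `DistilBertForMaskedLM` forward pass with SPLADE max pooling, while the query side only looks up one learned static weight per vocabulary token. As a rough conceptual sketch (the token weights below are hypothetical, not taken from this model's `SparseStaticEmbedding`), query encoding reduces to a lookup-and-sum with no transformer forward pass:

```python
# Hypothetical per-token static weights; the real model learns one weight
# per entry of the 30522-token DistilBERT vocabulary.
static_weights = {"tiber": 1.7, "river": 1.1, "italy": 1.3, "the": 0.0, "in": 0.0}

def encode_query(tokens: list[str]) -> dict[str, float]:
    """Sum the static weight of each token; no transformer inference needed."""
    embedding: dict[str, float] = {}
    for token in tokens:
        weight = static_weights.get(token, 0.0)
        if weight > 0.0:  # zero-weight tokens stay inactive, keeping the vector sparse
            embedding[token] = embedding.get(token, 0.0) + weight
    return embedding

print(encode_query(["the", "tiber", "river", "in", "italy"]))
# {'tiber': 1.7, 'river': 1.1, 'italy': 1.3}
```

This is why query encoding is essentially free at search time, while indexing documents still costs one transformer pass each.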

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.
```python
from sentence_transformers import SparseEncoder

# Download from the 🤗 Hub
model = SparseEncoder("Jgmorenof/inference-free-splade-distilbert-base-uncased-nq")
# Run inference
queries = [
    "who is cornelius in the book of acts",
]
documents = [
    'Cornelius the Centurion Cornelius (Greek: Κορνήλιος) was a Roman centurion who is considered by Christians to be one of the first Gentiles to convert to the faith, as related in Acts of the Apostles.',
    "Joe Ranft Ranft reunited with Lasseter when he was hired by Pixar in 1991 as their head of story.[1] There he worked on all of their films produced up to 2006; this included Toy Story (for which he received an Academy Award nomination) and A Bug's Life, as the co-story writer and others as story supervisor. His final film was Cars. He also voiced characters in many of the films, including Heimlich the caterpillar in A Bug's Life, Wheezy the penguin in Toy Story 2, and Jacques the shrimp in Finding Nemo.[1]",
    'Wonderful Tonight "Wonderful Tonight" is a ballad written by Eric Clapton. It was included on Clapton\'s 1977 album Slowhand. Clapton wrote the song about Pattie Boyd.[1] The female vocal harmonies on the song are provided by Marcella Detroit (then Marcy Levy) and Yvonne Elliman.',
]
query_embeddings = model.encode_query(queries)
document_embeddings = model.encode_document(documents)
print(query_embeddings.shape, document_embeddings.shape)
# [1, 30522] [3, 30522]

# Get the similarity scores for the embeddings
similarities = model.similarity(query_embeddings, document_embeddings)
print(similarities)
# tensor([[5.8161, 0.0000, 0.0000]])
```
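
Since both embeddings are sparse, the dot-product similarity above depends only on the dimensions where query and document are both non-zero, which is what makes retrieval with an inverted index cheap. A minimal standalone sketch with made-up (token, weight) pairs (illustration only, not actual model outputs):

```python
def sparse_dot(query: dict[str, float], doc: dict[str, float]) -> float:
    """Dot product over the intersection of non-zero dimensions."""
    # Iterate the smaller vector so the cost is proportional to the
    # query's active dimensions (only ~6 on average for this model).
    if len(doc) < len(query):
        query, doc = doc, query
    return sum(weight * doc[token] for token, weight in query.items() if token in doc)

# Hypothetical (token, weight) expansions for illustration only
query = {"cornelius": 2.1, "acts": 1.4, "centurion": 0.9}
doc = {"cornelius": 1.8, "centurion": 1.2, "roman": 0.7, "gentiles": 0.5}
print(round(sparse_dot(query, doc), 2))  # 2.1*1.8 + 0.9*1.2 = 4.86
```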

<!--
### Direct Usage (Transformers)

<details><summary>Click to see the direct usage in Transformers</summary>

</details>
-->

<!--
### Downstream Usage (Sentence Transformers)

You can finetune this model on your own dataset.

<details><summary>Click to expand</summary>

</details>
-->

<!--
### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->

## Evaluation

### Metrics

#### Sparse Information Retrieval

* Datasets: `NanoMSMARCO`, `NanoNFCorpus` and `NanoNQ`
* Evaluated with [<code>SparseInformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseInformationRetrievalEvaluator)

| Metric                | NanoMSMARCO | NanoNFCorpus | NanoNQ     |
|:----------------------|:------------|:-------------|:-----------|
| dot_accuracy@1        | 0.3         | 0.4          | 0.34       |
| dot_accuracy@3        | 0.56        | 0.48         | 0.54       |
| dot_accuracy@5        | 0.62        | 0.52         | 0.64       |
| dot_accuracy@10       | 0.78        | 0.6          | 0.76       |
| dot_precision@1       | 0.3         | 0.4          | 0.34       |
| dot_precision@3       | 0.1867      | 0.3467       | 0.18       |
| dot_precision@5       | 0.124       | 0.3          | 0.128      |
| dot_precision@10      | 0.078       | 0.238        | 0.078      |
| dot_recall@1          | 0.3         | 0.0405       | 0.34       |
| dot_recall@3          | 0.56        | 0.0719       | 0.51       |
| dot_recall@5          | 0.62        | 0.087        | 0.59       |
| dot_recall@10         | 0.78        | 0.1129       | 0.71       |
| **dot_ndcg@10**       | **0.5334**  | **0.3022**   | **0.5262** |
| dot_mrr@10            | 0.4558      | 0.4581       | 0.4761     |
| dot_map@100           | 0.4656      | 0.1328       | 0.4704     |
| query_active_dims     | 6.38        | 4.76         | 9.44       |
| query_sparsity_ratio  | 0.9998      | 0.9998       | 0.9997     |
| corpus_active_dims    | 61.2481     | 77.529       | 54.6117    |
| corpus_sparsity_ratio | 0.998       | 0.9975       | 0.9982     |
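
The `*_active_dims` and `*_sparsity_ratio` rows are two views of the same quantity: the sparsity ratio is the average fraction of the 30522 output dimensions that are zero. For example, for NanoMSMARCO queries:

```python
VOCAB_SIZE = 30522  # output dimensionality = DistilBERT vocabulary size

def sparsity_ratio(active_dims: float, dim: int = VOCAB_SIZE) -> float:
    """Average fraction of embedding dimensions that are exactly zero."""
    return 1.0 - active_dims / dim

# NanoMSMARCO queries average ~6.38 active dimensions out of 30522
print(round(sparsity_ratio(6.38), 4))  # 0.9998
```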
#### Sparse Nano BEIR

* Dataset: `NanoBEIR_mean`
* Evaluated with [<code>SparseNanoBEIREvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseNanoBEIREvaluator) with these parameters:
  ```json
  {
      "dataset_names": [
          "msmarco",
          "nfcorpus",
          "nq"
      ]
  }
  ```

| Metric                | Value      |
|:----------------------|:-----------|
| dot_accuracy@1        | 0.3467     |
| dot_accuracy@3        | 0.5267     |
| dot_accuracy@5        | 0.5933     |
| dot_accuracy@10       | 0.7133     |
| dot_precision@1       | 0.3467     |
| dot_precision@3       | 0.2378     |
| dot_precision@5       | 0.184      |
| dot_precision@10      | 0.1313     |
| dot_recall@1          | 0.2268     |
| dot_recall@3          | 0.3806     |
| dot_recall@5          | 0.4323     |
| dot_recall@10         | 0.5343     |
| **dot_ndcg@10**       | **0.4539** |
| dot_mrr@10            | 0.4633     |
| dot_map@100           | 0.3562     |
| query_active_dims     | 6.86       |
| query_sparsity_ratio  | 0.9998     |
| corpus_active_dims    | 62.3733    |
| corpus_sparsity_ratio | 0.998      |

<!--
## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->

<!--
### Recommendations

*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->

## Training Details

### Training Dataset

#### natural-questions

* Dataset: [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions) at [f9e894e](https://huggingface.co/datasets/sentence-transformers/natural-questions/tree/f9e894e1081e206e577b4eaa9ee6de2b06ae6f17)
* Size: 99,000 training samples
* Columns: <code>query</code> and <code>answer</code>
* Approximate statistics based on the first 1000 samples:
  |         | query                                                                              | answer                                                                              |
  |:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
  | type    | string                                                                             | string                                                                              |
  | details | <ul><li>min: 10 tokens</li><li>mean: 11.71 tokens</li><li>max: 26 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 131.81 tokens</li><li>max: 450 tokens</li></ul> |
* Samples:
  | query                                                          | answer                                                                                                                                                                                                                                                                                                                                                                                                                                   |
  |:---------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
  | <code>who played the father in papa don't preach</code>       | <code>Alex McArthur Alex McArthur (born March 6, 1957) is an American actor.</code>                                                                                                                                                                                                                                                                                                                                                      |
  | <code>where was the location of the battle of hastings</code> | <code>Battle of Hastings The Battle of Hastings[a] was fought on 14 October 1066 between the Norman-French army of William, the Duke of Normandy, and an English army under the Anglo-Saxon King Harold Godwinson, beginning the Norman conquest of England. It took place approximately 7 miles (11 kilometres) northwest of Hastings, close to the present-day town of Battle, East Sussex, and was a decisive Norman victory.</code>     |
  | <code>how many puppies can a dog give birth to</code>         | <code>Canine reproduction The largest litter size to date was set by a Neapolitan Mastiff in Manea, Cambridgeshire, UK on November 29, 2004; the litter was 24 puppies.[22]</code>                                                                                                                                                                                                                                                        |
* Loss: [<code>SpladeLoss</code>](https://sbert.net/docs/package_reference/sparse_encoder/losses.html#spladeloss) with these parameters:
  ```json
  {
      "loss": "SparseMultipleNegativesRankingLoss(scale=1.0, similarity_fct='dot_score', gather_across_devices=False)",
      "document_regularizer_weight": 0.003,
      "query_regularizer_weight": 0
  }
  ```
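
The `document_regularizer_weight` of 0.003 scales a FLOPS-style sparsity penalty (the `loss:FlopsLoss` tag above): for each vocabulary dimension, take the mean absolute activation over the batch, square it, and sum over dimensions, which pushes document vectors toward sparsity. A minimal sketch of that formulation, included as an assumption for illustration (the library's own `FlopsLoss` operates on batched tensors):

```python
def flops_penalty(batch: list[list[float]]) -> float:
    """Sum over dimensions of the squared mean absolute activation."""
    n = len(batch)
    dim = len(batch[0])
    return sum(
        (sum(abs(emb[j]) for emb in batch) / n) ** 2
        for j in range(dim)
    )

# Two toy 4-dimensional "document" embeddings
batch = [
    [0.0, 2.0, 0.0, 1.0],
    [0.0, 0.0, 0.0, 3.0],
]
print(flops_penalty(batch))  # 0^2 + 1^2 + 0^2 + 2^2 = 5.0
```

Note that `query_regularizer_weight` is 0 here: the static query side needs no sparsity pressure.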

### Evaluation Dataset

#### natural-questions

* Dataset: [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions) at [f9e894e](https://huggingface.co/datasets/sentence-transformers/natural-questions/tree/f9e894e1081e206e577b4eaa9ee6de2b06ae6f17)
* Size: 1,000 evaluation samples
* Columns: <code>query</code> and <code>answer</code>
* Approximate statistics based on the first 1000 samples:
  |         | query                                                                              | answer                                                                               |
  |:--------|:-----------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|
  | type    | string                                                                             | string                                                                               |
  | details | <ul><li>min: 10 tokens</li><li>mean: 11.69 tokens</li><li>max: 23 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 134.01 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
  | query                                                   | answer                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    |
  |:---------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
  | <code>where is the tiber river located in italy</code> | <code>Tiber The Tiber (/ˈtaɪbər/, Latin: Tiberis,[1] Italian: Tevere [ˈteːvere])[2] is the third-longest river in Italy, rising in the Apennine Mountains in Emilia-Romagna and flowing 406 kilometres (252 mi) through Tuscany, Umbria and Lazio, where it is joined by the river Aniene, to the Tyrrhenian Sea, between Ostia and Fiumicino.[3] It drains a basin estimated at 17,375 square kilometres (6,709 sq mi). The river has achieved lasting fame as the main watercourse of the city of Rome, founded on its eastern banks.</code> |
  | <code>what kind of car does jay gatsby drive</code>    | <code>Jay Gatsby At the Buchanan home, Jordan Baker, Nick, Jay, and the Buchanans decide to visit New York City. Tom borrows Gatsby's yellow Rolls Royce to drive up to the city. On the way to New York City, Tom makes a detour at a gas station in "the Valley of Ashes", a run-down part of Long Island. The owner, George Wilson, shares his concern that his wife, Myrtle, may be having an affair. This unnerves Tom, who has been having an affair with Myrtle, and he leaves in a hurry.</code>                                      |
  | <code>who sings if i can dream about you</code>        | <code>I Can Dream About You "I Can Dream About You" is a song performed by American singer Dan Hartman on the soundtrack album of the film Streets of Fire. Released in 1984 as a single from the soundtrack, and included on Hartman's album I Can Dream About You, it reached number 6 on the Billboard Hot 100.[1]</code>                                                                                                                                                                                                                 |
* Loss: [<code>SpladeLoss</code>](https://sbert.net/docs/package_reference/sparse_encoder/losses.html#spladeloss) with these parameters:
  ```json
  {
      "loss": "SparseMultipleNegativesRankingLoss(scale=1.0, similarity_fct='dot_score', gather_across_devices=False)",
      "document_regularizer_weight": 0.003,
      "query_regularizer_weight": 0
  }
  ```

### Training Hyperparameters
#### Non-Default Hyperparameters

- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `fp16`: True
- `batch_sampler`: no_duplicates
- `router_mapping`: {'query': 'query', 'answer': 'document'}

#### All Hyperparameters
<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`: 
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {'query': 'query', 'answer': 'document'}
- `learning_rate_mapping`: {}

</details>

### Training Logs
| Epoch  | Step | Training Loss | Validation Loss | NanoMSMARCO_dot_ndcg@10 | NanoNFCorpus_dot_ndcg@10 | NanoNQ_dot_ndcg@10 | NanoBEIR_mean_dot_ndcg@10 |
|:------:|:----:|:-------------:|:---------------:|:-----------------------:|:------------------------:|:------------------:|:-------------------------:|
| 0.0323 | 200  | 0.2374        | -               | -                       | -                        | -                  | -                         |
| 0.0646 | 400  | 0.0873        | -               | -                       | -                        | -                  | -                         |
| 0.0970 | 600  | 0.0736        | -               | -                       | -                        | -                  | -                         |
| 0.1293 | 800  | 0.0637        | -               | -                       | -                        | -                  | -                         |
| 0.1616 | 1000 | 0.066         | 0.0872          | 0.5087                  | 0.3291                   | 0.4883             | 0.4420                    |
| 0.1939 | 1200 | 0.071         | -               | -                       | -                        | -                  | -                         |
| 0.2262 | 1400 | 0.0777        | -               | -                       | -                        | -                  | -                         |
| 0.2586 | 1600 | 0.089         | -               | -                       | -                        | -                  | -                         |
| 0.2909 | 1800 | 0.0884        | -               | -                       | -                        | -                  | -                         |
| 0.3232 | 2000 | 0.0887        | 0.1115          | 0.5183                  | 0.3107                   | 0.4583             | 0.4291                    |
| 0.3555 | 2200 | 0.0916        | -               | -                       | -                        | -                  | -                         |
| 0.3878 | 2400 | 0.0925        | -               | -                       | -                        | -                  | -                         |
| 0.4202 | 2600 | 0.089         | -               | -                       | -                        | -                  | -                         |
| 0.4525 | 2800 | 0.088         | -               | -                       | -                        | -                  | -                         |
| 0.4848 | 3000 | 0.0837        | 0.1003          | 0.5358                  | 0.3103                   | 0.5365             | 0.4609                    |
| 0.5171 | 3200 | 0.0825        | -               | -                       | -                        | -                  | -                         |
| 0.5495 | 3400 | 0.0905        | -               | -                       | -                        | -                  | -                         |
| 0.5818 | 3600 | 0.0823        | -               | -                       | -                        | -                  | -                         |
| 0.6141 | 3800 | 0.089         | -               | -                       | -                        | -                  | -                         |
| 0.6464 | 4000 | 0.0803        | 0.0960          | 0.5002                  | 0.3057                   | 0.5083             | 0.4381                    |
| 0.6787 | 4200 | 0.0861        | -               | -                       | -                        | -                  | -                         |
| 0.7111 | 4400 | 0.0798        | -               | -                       | -                        | -                  | -                         |
| 0.7434 | 4600 | 0.0755        | -               | -                       | -                        | -                  | -                         |
| 0.7757 | 4800 | 0.0798        | -               | -                       | -                        | -                  | -                         |
| 0.8080 | 5000 | 0.0779        | 0.0910          | 0.5322                  | 0.3009                   | 0.5520             | 0.4617                    |
| 0.8403 | 5200 | 0.083         | -               | -                       | -                        | -                  | -                         |
| 0.8727 | 5400 | 0.078         | -               | -                       | -                        | -                  | -                         |
| 0.9050 | 5600 | 0.0719        | -               | -                       | -                        | -                  | -                         |
| 0.9373 | 5800 | 0.0733        | -               | -                       | -                        | -                  | -                         |
| 0.9696 | 6000 | 0.0761        | 0.0852          | 0.5365                  | 0.3051                   | 0.5297             | 0.4571                    |
| -1     | -1   | -             | -               | 0.5334                  | 0.3022                   | 0.5262             | 0.4539                    |
737
+
738
+ ### Framework Versions
739
+ - Python: 3.12.11
740
+ - Sentence Transformers: 5.1.0
741
+ - Transformers: 4.55.4
742
+ - PyTorch: 2.8.0+cu126
743
+ - Accelerate: 1.10.1
744
+ - Datasets: 4.0.0
745
+ - Tokenizers: 0.21.4
746
+
747
+ ## Citation
748
+
749
+ ### BibTeX
750
+
751
+ #### Sentence Transformers
752
+ ```bibtex
753
+ @inproceedings{reimers-2019-sentence-bert,
754
+ title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
755
+ author = "Reimers, Nils and Gurevych, Iryna",
756
+ booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
757
+ month = "11",
758
+ year = "2019",
759
+ publisher = "Association for Computational Linguistics",
760
+ url = "https://arxiv.org/abs/1908.10084",
761
+ }
762
+ ```
763
+
764
+ #### SpladeLoss
765
+ ```bibtex
766
+ @misc{formal2022distillationhardnegativesampling,
767
+ title={From Distillation to Hard Negative Sampling: Making Sparse Neural IR Models More Effective},
768
+ author={Thibault Formal and Carlos Lassance and Benjamin Piwowarski and Stéphane Clinchant},
769
+ year={2022},
770
+ eprint={2205.04733},
771
+ archivePrefix={arXiv},
772
+ primaryClass={cs.IR},
773
+ url={https://arxiv.org/abs/2205.04733},
774
+ }
775
+ ```
776
+
777
+ #### SparseMultipleNegativesRankingLoss
778
+ ```bibtex
779
+ @misc{henderson2017efficient,
780
+ title={Efficient Natural Language Response Suggestion for Smart Reply},
781
+ author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
782
+ year={2017},
783
+ eprint={1705.00652},
784
+ archivePrefix={arXiv},
785
+ primaryClass={cs.CL}
786
+ }
787
+ ```
788
+
789
+ #### FlopsLoss
790
+ ```bibtex
791
+ @article{paria2020minimizing,
+ title={Minimizing flops to learn efficient sparse representations},
+ author={Paria, Biswajit and Yeh, Chih-Kuan and Yen, Ian EH and Xu, Ning and Ravikumar, Pradeep and P{\'o}czos, Barnab{\'a}s},
+ journal={arXiv preprint arXiv:2004.05665},
+ year={2020}
+ }
+ ```
+
+ <!--
+ ## Glossary
+
+ *Clearly define terms in order to be accessible across audiences.*
+ -->
+
+ <!--
+ ## Model Card Authors
+
+ *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
+ -->
+
+ <!--
+ ## Model Card Contact
+
+ *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
+ -->
config_sentence_transformers.json ADDED
@@ -0,0 +1,14 @@
+ {
+ "model_type": "SparseEncoder",
+ "__version__": {
+ "sentence_transformers": "5.1.0",
+ "transformers": "4.55.4",
+ "pytorch": "2.8.0+cu126"
+ },
+ "prompts": {
+ "query": "",
+ "document": ""
+ },
+ "default_prompt_name": null,
+ "similarity_fn_name": "dot"
+ }
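The `"similarity_fn_name": "dot"` setting above means query–document relevance is the dot product of the two sparse vectors, which reduces to a sum over the vocabulary terms the two vectors share. A minimal plain-Python sketch of that scoring (the token weights are hypothetical, and this is not the library's implementation):

```python
# Sparse embeddings as {token: weight} dicts; the dot product only needs
# the tokens present in both vectors (every other product is zero).
def sparse_dot(query_vec, doc_vec):
    # Iterate over the smaller dict for efficiency.
    small, large = sorted((query_vec, doc_vec), key=len)
    return sum(w * large[t] for t, w in small.items() if t in large)

# Hypothetical SPLADE-style activations (token -> weight).
query = {"capital": 1.2, "france": 1.5}
doc = {"paris": 0.9, "capital": 0.8, "france": 1.1, "city": 0.4}

score = sparse_dot(query, doc)  # 1.2*0.8 + 1.5*1.1 = 2.61
```

Because only overlapping terms contribute, this is the same score an inverted index would compute, which is why sparse encoders pair naturally with classical retrieval infrastructure.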
document_0_MLMTransformer/config.json ADDED
@@ -0,0 +1,23 @@
+ {
+ "activation": "gelu",
+ "architectures": [
+ "DistilBertForMaskedLM"
+ ],
+ "attention_dropout": 0.1,
+ "dim": 768,
+ "dropout": 0.1,
+ "hidden_dim": 3072,
+ "initializer_range": 0.02,
+ "max_position_embeddings": 512,
+ "model_type": "distilbert",
+ "n_heads": 12,
+ "n_layers": 6,
+ "pad_token_id": 0,
+ "qa_dropout": 0.1,
+ "seq_classif_dropout": 0.2,
+ "sinusoidal_pos_embds": false,
+ "tie_weights_": true,
+ "torch_dtype": "float32",
+ "transformers_version": "4.55.4",
+ "vocab_size": 30522
+ }
document_0_MLMTransformer/model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:52ffed019e7b4f614489b275d1b9efa098ab4db3bd0d9764b8042ebb2f0f6108
+ size 267954768
document_0_MLMTransformer/sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+ "max_seq_length": 512,
+ "do_lower_case": false
+ }
document_0_MLMTransformer/special_tokens_map.json ADDED
@@ -0,0 +1,7 @@
+ {
+ "cls_token": "[CLS]",
+ "mask_token": "[MASK]",
+ "pad_token": "[PAD]",
+ "sep_token": "[SEP]",
+ "unk_token": "[UNK]"
+ }
document_0_MLMTransformer/tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
document_0_MLMTransformer/tokenizer_config.json ADDED
@@ -0,0 +1,56 @@
+ {
+ "added_tokens_decoder": {
+ "0": {
+ "content": "[PAD]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "100": {
+ "content": "[UNK]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "101": {
+ "content": "[CLS]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "102": {
+ "content": "[SEP]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "103": {
+ "content": "[MASK]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ }
+ },
+ "clean_up_tokenization_spaces": false,
+ "cls_token": "[CLS]",
+ "do_lower_case": true,
+ "extra_special_tokens": {},
+ "mask_token": "[MASK]",
+ "model_max_length": 512,
+ "pad_token": "[PAD]",
+ "sep_token": "[SEP]",
+ "strip_accents": null,
+ "tokenize_chinese_chars": true,
+ "tokenizer_class": "DistilBertTokenizer",
+ "unk_token": "[UNK]"
+ }
document_0_MLMTransformer/vocab.txt ADDED
The diff for this file is too large to render. See raw diff
 
document_1_SpladePooling/config.json ADDED
@@ -0,0 +1,5 @@
+ {
+ "pooling_strategy": "max",
+ "activation_function": "relu",
+ "word_embedding_dimension": 30522
+ }
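The SpladePooling config above (`relu` activation, `max` pooling over a 30522-dim vocabulary) corresponds to the usual SPLADE formulation: per-token MLM logits are passed through log(1 + relu(x)) and then max-pooled over the sequence, yielding one non-negative weight per vocabulary term. A plain-Python sketch with a toy 2-token × 4-vocab logit matrix (the numbers are hypothetical; this is a sketch, not the library's implementation):

```python
import math

def splade_pool(logits):
    """logits: one row of vocab_size scores per input token.
    Returns one sparse weight per vocab entry:
    max over tokens of log(1 + relu(logit))."""
    vocab_size = len(logits[0])
    return [
        max(math.log1p(max(row[v], 0.0)) for row in logits)
        for v in range(vocab_size)
    ]

# 2 tokens, vocab of 4; negative logits are zeroed by the relu,
# so most vocabulary entries end up exactly 0 (hence "sparse").
logits = [
    [1.0, -2.0, 0.5, 0.0],
    [0.0,  3.0, 2.0, -1.0],
]
weights = splade_pool(logits)
# weights[0] = log(2) ≈ 0.693, weights[1] = log(4) ≈ 1.386, weights[3] = 0.0
```

The log saturation keeps a few high-activation terms from dominating the vector, while relu plus max pooling guarantees sparsity and non-negativity.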
modules.json ADDED
@@ -0,0 +1,8 @@
+ [
+ {
+ "idx": 0,
+ "name": "0",
+ "path": "",
+ "type": "sentence_transformers.models.Router"
+ }
+ ]
query_0_SparseStaticEmbedding/config.json ADDED
@@ -0,0 +1,3 @@
+ {
+ "frozen": false
+ }
query_0_SparseStaticEmbedding/model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a2d74b2bb69e7b98c2381eab5d5de799859d59b379b178032d12984011d148cd
+ size 122168
query_0_SparseStaticEmbedding/special_tokens_map.json ADDED
@@ -0,0 +1,7 @@
+ {
+ "cls_token": "[CLS]",
+ "mask_token": "[MASK]",
+ "pad_token": "[PAD]",
+ "sep_token": "[SEP]",
+ "unk_token": "[UNK]"
+ }
query_0_SparseStaticEmbedding/tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
query_0_SparseStaticEmbedding/tokenizer_config.json ADDED
@@ -0,0 +1,56 @@
+ {
+ "added_tokens_decoder": {
+ "0": {
+ "content": "[PAD]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "100": {
+ "content": "[UNK]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "101": {
+ "content": "[CLS]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "102": {
+ "content": "[SEP]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "103": {
+ "content": "[MASK]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ }
+ },
+ "clean_up_tokenization_spaces": false,
+ "cls_token": "[CLS]",
+ "do_lower_case": true,
+ "extra_special_tokens": {},
+ "mask_token": "[MASK]",
+ "model_max_length": 512,
+ "pad_token": "[PAD]",
+ "sep_token": "[SEP]",
+ "strip_accents": null,
+ "tokenize_chinese_chars": true,
+ "tokenizer_class": "DistilBertTokenizer",
+ "unk_token": "[UNK]"
+ }
query_0_SparseStaticEmbedding/vocab.txt ADDED
The diff for this file is too large to render. See raw diff
 
router_config.json ADDED
@@ -0,0 +1,20 @@
+ {
+ "types": {
+ "query_0_SparseStaticEmbedding": "sentence_transformers.sparse_encoder.models.SparseStaticEmbedding.SparseStaticEmbedding",
+ "document_0_MLMTransformer": "sentence_transformers.sparse_encoder.models.MLMTransformer.MLMTransformer",
+ "document_1_SpladePooling": "sentence_transformers.sparse_encoder.models.SpladePooling.SpladePooling"
+ },
+ "structure": {
+ "query": [
+ "query_0_SparseStaticEmbedding"
+ ],
+ "document": [
+ "document_0_MLMTransformer",
+ "document_1_SpladePooling"
+ ]
+ },
+ "parameters": {
+ "default_route": "document",
+ "allow_empty_key": true
+ }
+ }
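`router_config.json` wires the asymmetric (inference-free) pipeline: queries are routed through the lightweight `SparseStaticEmbedding` lookup only, while documents run the full `MLMTransformer` + `SpladePooling` SPLADE stack, and inputs with no recognized route fall back to `"default_route": "document"`. A minimal sketch of that dispatch logic with hypothetical stand-in module functions (the real modules are the sentence-transformers classes listed under `"types"`, not these):

```python
# Hypothetical stand-ins for the routed modules; each maps text -> sparse vector.
def sparse_static_embedding(text):
    return f"static({text})"             # per-token weight lookup, no transformer pass

def mlm_transformer(text):
    return f"mlm({text})"                # full DistilBERT masked-LM forward

def splade_pooling(hidden):
    return f"splade_max_relu({hidden})"  # relu + log-saturation + max pooling

# Mirrors the "structure" and "parameters" keys of router_config.json.
STRUCTURE = {
    "query": [sparse_static_embedding],
    "document": [mlm_transformer, splade_pooling],
}
DEFAULT_ROUTE = "document"  # "default_route"

def encode(text, task=None):
    # allow_empty_key: a missing or unknown task falls back to the default route.
    route = task if task in STRUCTURE else DEFAULT_ROUTE
    out = text
    for module in STRUCTURE[route]:
        out = module(out)
    return out

encode("what is splade?", task="query")  # -> "static(what is splade?)"
encode("SPLADE is a sparse model.")      # -> "splade_max_relu(mlm(SPLADE is a sparse model.))"
```

This split is why query encoding is nearly free at inference time: only indexing (document encoding) pays the transformer cost.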