tomaarsen (HF Staff) committed
Commit d10d42c · verified · 1 Parent(s): aa9e3a2

Add new SparseEncoder model

1_Pooling/config.json ADDED

{
  "word_embedding_dimension": 1024,
  "pooling_mode_cls_token": true,
  "pooling_mode_mean_tokens": false,
  "pooling_mode_max_tokens": false,
  "pooling_mode_mean_sqrt_len_tokens": false,
  "pooling_mode_weightedmean_tokens": false,
  "pooling_mode_lasttoken": false,
  "include_prompt": true
}
2_CSRSparsity/config.json ADDED

{
  "input_dim": 1024,
  "hidden_dim": 4096,
  "k": 256,
  "k_aux": 512,
  "normalize": false,
  "dead_threshold": 30
}
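
To make the config concrete, here is a minimal sketch of what these numbers imply, under the assumption that `hidden_dim` is the size of the sparse embedding space and `k` is the number of dimensions kept active per embedding (field meanings inferred from the model card below, not from library documentation):

```python
# Assumed interpretation of 2_CSRSparsity/config.json: embeddings live in a
# 4096-dim space, but only k = 256 dimensions are non-zero per vector.
config = {"input_dim": 1024, "hidden_dim": 4096, "k": 256,
          "k_aux": 512, "normalize": False, "dead_threshold": 30}

sparsity_ratio = 1 - config["k"] / config["hidden_dim"]
print(sparsity_ratio)  # 0.9375
```

This matches the `corpus_sparsity_ratio` of 0.9375 reported in the card's metrics for 256 active dimensions.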
2_CSRSparsity/model.safetensors ADDED

version https://git-lfs.github.com/spec/v1
oid sha256:e98d16741bf56236f121c97a7572f38fdd82dda4beaf913bab649bc81cd1d7c7
size 16830864
README.md ADDED

---
language:
- en
license: apache-2.0
tags:
- sentence-transformers
- sparse-encoder
- sparse
- csr
- generated_from_trainer
- dataset_size:99000
- loss:CSRLoss
- loss:SparseMultipleNegativesRankingLoss
base_model: mixedbread-ai/mxbai-embed-large-v1
widget:
- text: Saudi Arabia–United Arab Emirates relations However, the UAE and Saudi Arabia continue to take somewhat differing stances on regional conflicts such the Yemeni Civil War, where the UAE opposes Al-Islah, and supports the Southern Movement, which has fought against Saudi-backed forces, and the Syrian Civil War, where the UAE has disagreed with Saudi support for Islamist movements.[4]
- text: Economy of New Zealand New Zealand's diverse market economy has a sizable service sector, accounting for 63% of all GDP activity in 2013.[17] Large scale manufacturing industries include aluminium production, food processing, metal fabrication, wood and paper products. Mining, manufacturing, electricity, gas, water, and waste services accounted for 16.5% of GDP in 2013.[17] The primary sector continues to dominate New Zealand's exports, despite accounting for 6.5% of GDP in 2013.[17]
- text: who was the first president of indian science congress meeting held in kolkata in 1914
- text: Get Over It (Eagles song) "Get Over It" is a song by the Eagles released as a single after a fourteen-year breakup. It was also the first song written by bandmates Don Henley and Glenn Frey when the band reunited. "Get Over It" was played live for the first time during their Hell Freezes Over tour in 1994. It returned the band to the U.S. Top 40 after a fourteen-year absence, peaking at No. 31 on the Billboard Hot 100 chart. It also hit No. 4 on the Billboard Mainstream Rock Tracks chart. The song was not played live by the Eagles after the "Hell Freezes Over" tour in 1994. It remains the group's last Top 40 hit in the U.S.
- text: 'Cornelius the Centurion Cornelius (Greek: Κορνήλιος) was a Roman centurion who is considered by Christians to be one of the first Gentiles to convert to the faith, as related in Acts of the Apostles.'
datasets:
- sentence-transformers/natural-questions
pipeline_tag: feature-extraction
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
- query_active_dims
- query_sparsity_ratio
- corpus_active_dims
- corpus_sparsity_ratio
co2_eq_emissions:
  emissions: 56.314104914464366
  energy_consumed: 0.14487732225320263
  source: codecarbon
  training_type: fine-tuning
  on_cloud: false
  cpu_model: 13th Gen Intel(R) Core(TM) i7-13700K
  ram_total_size: 31.777088165283203
  hours_used: 0.379
  hardware_used: 1 x NVIDIA GeForce RTX 3090
model-index:
- name: Sparse CSR model trained on Natural Questions
  results:
  - task:
      type: sparse-information-retrieval
      name: Sparse Information Retrieval
    dataset:
      name: NanoMSMARCO 4
      type: NanoMSMARCO_4
    metrics:
    - type: cosine_accuracy@1
      value: 0.02
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 0.12
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 0.18
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 0.26
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.02
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.039999999999999994
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.036000000000000004
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.026000000000000002
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.02
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 0.12
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 0.18
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 0.26
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.13103120560180764
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.09107936507936508
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.10057358250385884
      name: Cosine Map@100
    - type: query_active_dims
      value: 4.0
      name: Query Active Dims
    - type: query_sparsity_ratio
      value: 0.9990234375
      name: Query Sparsity Ratio
    - type: corpus_active_dims
      value: 4.0
      name: Corpus Active Dims
    - type: corpus_sparsity_ratio
      value: 0.9990234375
      name: Corpus Sparsity Ratio
  - task:
      type: sparse-information-retrieval
      name: Sparse Information Retrieval
    dataset:
      name: NanoNQ 4
      type: NanoNQ_4
    metrics:
    - type: cosine_accuracy@1
      value: 0.1
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 0.16
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 0.2
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 0.26
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.1
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.05333333333333333
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.04
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.026000000000000002
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.1
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 0.16
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 0.19
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 0.24
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.1617581884859466
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.13905555555555554
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.1454920368793091
      name: Cosine Map@100
    - type: query_active_dims
      value: 4.0
      name: Query Active Dims
    - type: query_sparsity_ratio
      value: 0.9990234375
      name: Query Sparsity Ratio
    - type: corpus_active_dims
      value: 4.0
      name: Corpus Active Dims
    - type: corpus_sparsity_ratio
      value: 0.9990234375
      name: Corpus Sparsity Ratio
  - task:
      type: sparse-nano-beir
      name: Sparse Nano BEIR
    dataset:
      name: NanoBEIR mean 4
      type: NanoBEIR_mean_4
    metrics:
    - type: cosine_accuracy@1
      value: 0.060000000000000005
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 0.14
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 0.19
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 0.26
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.060000000000000005
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.04666666666666666
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.038000000000000006
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.026000000000000002
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.060000000000000005
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 0.14
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 0.185
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 0.25
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.14639469704387714
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.11506746031746032
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.12303280969158396
      name: Cosine Map@100
    - type: query_active_dims
      value: 4.0
      name: Query Active Dims
    - type: query_sparsity_ratio
      value: 0.9990234375
      name: Query Sparsity Ratio
    - type: corpus_active_dims
      value: 4.0
      name: Corpus Active Dims
    - type: corpus_sparsity_ratio
      value: 0.9990234375
      name: Corpus Sparsity Ratio
  - task:
      type: sparse-information-retrieval
      name: Sparse Information Retrieval
    dataset:
      name: NanoMSMARCO 16
      type: NanoMSMARCO_16
    metrics:
    - type: cosine_accuracy@1
      value: 0.14
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 0.32
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 0.44
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 0.62
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.14
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.10666666666666665
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.08800000000000001
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.062
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.14
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 0.32
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 0.44
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 0.62
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.35227434410844155
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.26915873015873015
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.2834889322403155
      name: Cosine Map@100
    - type: query_active_dims
      value: 16.0
      name: Query Active Dims
    - type: query_sparsity_ratio
      value: 0.99609375
      name: Query Sparsity Ratio
    - type: corpus_active_dims
      value: 16.0
      name: Corpus Active Dims
    - type: corpus_sparsity_ratio
      value: 0.99609375
      name: Corpus Sparsity Ratio
  - task:
      type: sparse-information-retrieval
      name: Sparse Information Retrieval
    dataset:
      name: NanoNQ 16
      type: NanoNQ_16
    metrics:
    - type: cosine_accuracy@1
      value: 0.14
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 0.32
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 0.42
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 0.54
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.14
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.10666666666666666
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.084
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.054000000000000006
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.14
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 0.31
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 0.4
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 0.51
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.31588504937958484
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.25840476190476186
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.26639173210026346
      name: Cosine Map@100
    - type: query_active_dims
      value: 16.0
      name: Query Active Dims
    - type: query_sparsity_ratio
      value: 0.99609375
      name: Query Sparsity Ratio
    - type: corpus_active_dims
      value: 16.0
      name: Corpus Active Dims
    - type: corpus_sparsity_ratio
      value: 0.99609375
      name: Corpus Sparsity Ratio
  - task:
      type: sparse-nano-beir
      name: Sparse Nano BEIR
    dataset:
      name: NanoBEIR mean 16
      type: NanoBEIR_mean_16
    metrics:
    - type: cosine_accuracy@1
      value: 0.14
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 0.32
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 0.43
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 0.5800000000000001
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.14
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.10666666666666666
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.08600000000000001
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.058
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.14
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 0.315
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 0.42000000000000004
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 0.565
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.33407969674401317
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.263781746031746
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.27494033217028946
      name: Cosine Map@100
    - type: query_active_dims
      value: 16.0
      name: Query Active Dims
    - type: query_sparsity_ratio
      value: 0.99609375
      name: Query Sparsity Ratio
    - type: corpus_active_dims
      value: 16.0
      name: Corpus Active Dims
    - type: corpus_sparsity_ratio
      value: 0.99609375
      name: Corpus Sparsity Ratio
  - task:
      type: sparse-information-retrieval
      name: Sparse Information Retrieval
    dataset:
      name: NanoMSMARCO 64
      type: NanoMSMARCO_64
    metrics:
    - type: cosine_accuracy@1
      value: 0.42
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 0.6
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 0.74
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 0.78
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.42
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.2
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.14800000000000002
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.07800000000000001
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.42
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 0.6
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 0.74
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 0.78
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.5989097939719981
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.5405238095238094
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.5485629711673361
      name: Cosine Map@100
    - type: query_active_dims
      value: 64.0
      name: Query Active Dims
    - type: query_sparsity_ratio
      value: 0.984375
      name: Query Sparsity Ratio
    - type: corpus_active_dims
      value: 64.0
      name: Corpus Active Dims
    - type: corpus_sparsity_ratio
      value: 0.984375
      name: Corpus Sparsity Ratio
  - task:
      type: sparse-information-retrieval
      name: Sparse Information Retrieval
    dataset:
      name: NanoNQ 64
      type: NanoNQ_64
    metrics:
    - type: cosine_accuracy@1
      value: 0.36
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 0.58
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 0.74
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 0.78
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.36
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.2
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.15200000000000002
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.08199999999999999
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.34
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 0.54
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 0.68
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 0.73
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.5401684637852635
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.4945238095238095
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.4792528475589284
      name: Cosine Map@100
    - type: query_active_dims
      value: 64.0
      name: Query Active Dims
    - type: query_sparsity_ratio
      value: 0.984375
      name: Query Sparsity Ratio
    - type: corpus_active_dims
      value: 64.0
      name: Corpus Active Dims
    - type: corpus_sparsity_ratio
      value: 0.984375
      name: Corpus Sparsity Ratio
  - task:
      type: sparse-nano-beir
      name: Sparse Nano BEIR
    dataset:
      name: NanoBEIR mean 64
      type: NanoBEIR_mean_64
    metrics:
    - type: cosine_accuracy@1
      value: 0.39
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 0.59
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 0.74
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 0.78
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.39
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.2
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.15000000000000002
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.08
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.38
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 0.5700000000000001
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 0.71
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 0.755
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.5695391288786308
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.5175238095238095
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.5139079093631322
      name: Cosine Map@100
    - type: query_active_dims
      value: 64.0
      name: Query Active Dims
    - type: query_sparsity_ratio
      value: 0.984375
      name: Query Sparsity Ratio
    - type: corpus_active_dims
      value: 64.0
      name: Corpus Active Dims
    - type: corpus_sparsity_ratio
      value: 0.984375
      name: Corpus Sparsity Ratio
  - task:
      type: sparse-information-retrieval
      name: Sparse Information Retrieval
    dataset:
      name: NanoMSMARCO 256
      type: NanoMSMARCO_256
    metrics:
    - type: cosine_accuracy@1
      value: 0.44
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 0.62
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 0.68
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 0.82
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.44
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.20666666666666667
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.136
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.08199999999999999
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.44
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 0.62
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 0.68
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 0.82
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.6219451051635295
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.5601111111111111
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.5703043330639237
      name: Cosine Map@100
    - type: query_active_dims
      value: 256.0
      name: Query Active Dims
    - type: query_sparsity_ratio
      value: 0.9375
      name: Query Sparsity Ratio
    - type: corpus_active_dims
      value: 256.0
      name: Corpus Active Dims
    - type: corpus_sparsity_ratio
      value: 0.9375
      name: Corpus Sparsity Ratio
  - task:
      type: sparse-information-retrieval
      name: Sparse Information Retrieval
    dataset:
      name: NanoNQ 256
      type: NanoNQ_256
    metrics:
    - type: cosine_accuracy@1
      value: 0.56
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 0.72
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 0.78
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 0.86
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.56
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.24
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.16
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.092
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.54
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 0.67
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 0.72
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 0.82
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.6833794556448974
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.6571349206349205
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.6380047784658768
      name: Cosine Map@100
    - type: query_active_dims
      value: 256.0
      name: Query Active Dims
    - type: query_sparsity_ratio
      value: 0.9375
      name: Query Sparsity Ratio
    - type: corpus_active_dims
      value: 256.0
      name: Corpus Active Dims
    - type: corpus_sparsity_ratio
      value: 0.9375
      name: Corpus Sparsity Ratio
  - task:
      type: sparse-nano-beir
      name: Sparse Nano BEIR
    dataset:
      name: NanoBEIR mean 256
      type: NanoBEIR_mean_256
    metrics:
    - type: cosine_accuracy@1
      value: 0.5
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 0.6699999999999999
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 0.73
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 0.84
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.5
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.22333333333333333
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.14800000000000002
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.087
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.49
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 0.645
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 0.7
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 0.82
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.6526622804042135
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.6086230158730158
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.6041545557649002
      name: Cosine Map@100
    - type: query_active_dims
      value: 256.0
      name: Query Active Dims
    - type: query_sparsity_ratio
      value: 0.9375
      name: Query Sparsity Ratio
    - type: corpus_active_dims
      value: 256.0
      name: Corpus Active Dims
    - type: corpus_sparsity_ratio
      value: 0.9375
      name: Corpus Sparsity Ratio
---

# Sparse CSR model trained on Natural Questions

This is a [CSR Sparse Encoder](https://www.sbert.net/docs/sparse_encoder/usage/usage.html) model finetuned from [mixedbread-ai/mxbai-embed-large-v1](https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1) on the [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions) dataset using the [sentence-transformers](https://www.SBERT.net) library. It maps sentences & paragraphs to a 4096-dimensional sparse vector space with 256 maximum active dimensions and can be used for semantic search and sparse retrieval.

## Model Details

### Model Description
- **Model Type:** CSR Sparse Encoder
- **Base model:** [mixedbread-ai/mxbai-embed-large-v1](https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1) <!-- at revision db9d1fe0f31addb4978201b2bf3e577f3f8900d2 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 4096 dimensions (trained with 256 maximum active dimensions)
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
    - [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions)
- **Language:** en
- **License:** apache-2.0

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Documentation:** [Sparse Encoder Documentation](https://www.sbert.net/docs/sparse_encoder/usage/usage.html)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sparse Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=sparse-encoder)

### Full Model Architecture

```
SparseEncoder(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): CSRSparsity({'input_dim': 1024, 'hidden_dim': 4096, 'k': 256, 'k_aux': 512, 'normalize': False, 'dead_threshold': 30})
)
```
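
The core idea of the CSRSparsity head can be pictured as top-k sparsification of an up-projected embedding: the 1024-dim pooled vector is mapped into a 4096-dim latent space and only the 256 largest activations are kept. The following is an illustrative sketch of that idea only, not the library's actual implementation; the linear layer and function names are stand-ins:

```python
import torch

def topk_sparsify(hidden: torch.Tensor, k: int = 256) -> torch.Tensor:
    """Keep only the k largest activations per row; zero out the rest."""
    values, indices = torch.topk(hidden, k, dim=-1)
    sparse = torch.zeros_like(hidden)
    sparse.scatter_(-1, indices, values)
    return sparse

# 1024-dim pooled embedding -> 4096-dim latent -> at most 256 active dims
pooled = torch.randn(2, 1024)
up_proj = torch.nn.Linear(1024, 4096)  # illustrative stand-in for the CSR encoder
latent = torch.relu(up_proj(pooled))
sparse_embedding = topk_sparsify(latent, k=256)
print((sparse_embedding != 0).sum(dim=-1))  # at most 256 non-zeros per row
```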
+
+ ## Usage
+
+ ### Direct Usage (Sentence Transformers)
+
+ First install the Sentence Transformers library:
+
+ ```bash
+ pip install -U sentence-transformers
+ ```
+
+ Then you can load this model and run inference.
+ ```python
+ from sentence_transformers import SparseEncoder
+
+ # Download from the 🤗 Hub
+ model = SparseEncoder("tomaarsen/csr-mxbai-embed-large-v1-nq-cos-sim-scale-20-gamma-1")
+ # Run inference
+ queries = [
+     "who is cornelius in the book of acts",
+ ]
+ documents = [
+     'Cornelius the Centurion Cornelius (Greek: Κορνήλιος) was a Roman centurion who is considered by Christians to be one of the first Gentiles to convert to the faith, as related in Acts of the Apostles.',
+     "Joe Ranft Ranft reunited with Lasseter when he was hired by Pixar in 1991 as their head of story.[1] There he worked on all of their films produced up to 2006; this included Toy Story (for which he received an Academy Award nomination) and A Bug's Life, as the co-story writer and others as story supervisor. His final film was Cars. He also voiced characters in many of the films, including Heimlich the caterpillar in A Bug's Life, Wheezy the penguin in Toy Story 2, and Jacques the shrimp in Finding Nemo.[1]",
+     'Wonderful Tonight "Wonderful Tonight" is a ballad written by Eric Clapton. It was included on Clapton\'s 1977 album Slowhand. Clapton wrote the song about Pattie Boyd.[1] The female vocal harmonies on the song are provided by Marcella Detroit (then Marcy Levy) and Yvonne Elliman.',
+ ]
+ query_embeddings = model.encode_query(queries)
+ document_embeddings = model.encode_document(documents)
+ print(query_embeddings.shape, document_embeddings.shape)
+ # [1, 4096] [3, 4096]
+
+ # Get the similarity scores for the embeddings
+ similarities = model.similarity(query_embeddings, document_embeddings)
+ print(similarities)
+ # tensor([[0.7062, 0.2414, 0.2065]])
+ ```
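`model.similarity` computes cosine similarity for this model (the configuration sets `"similarity_fn_name": "cosine"`). For intuition, the same scores can be reproduced directly from the embedding matrices; a self-contained sketch with toy stand-in vectors (not real model output):

```python
import numpy as np

def cos_sim(a, b):
    # Row-normalize, then a matrix product yields all pairwise cosine similarities.
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return a @ b.T

q = np.array([[1.0, 0.0, 2.0, 0.0]])                        # stand-in for query_embeddings
d = np.array([[2.0, 0.0, 4.0, 0.0], [0.0, 3.0, 0.0, 0.0]])  # stand-in for document_embeddings
print(cos_sim(q, d))  # [[1. 0.]]
```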
+
+ <!--
+ ### Direct Usage (Transformers)
+
+ <details><summary>Click to see the direct usage in Transformers</summary>
+
+ </details>
+ -->
+
+ <!--
+ ### Downstream Usage (Sentence Transformers)
+
+ You can finetune this model on your own dataset.
+
+ <details><summary>Click to expand</summary>
+
+ </details>
+ -->
+
+ <!--
+ ### Out-of-Scope Use
+
+ *List how the model may foreseeably be misused and address what users ought not to do with the model.*
+ -->
+
+ ## Evaluation
+
+ ### Metrics
+
+ #### Sparse Information Retrieval
+
+ * Datasets: `NanoMSMARCO_4` and `NanoNQ_4`
+ * Evaluated with [<code>SparseInformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseInformationRetrievalEvaluator) with these parameters:
+   ```json
+   {
+       "max_active_dims": 4
+   }
+   ```
+
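The `_4` suffix refers to this `max_active_dims` budget, and the reported `query_sparsity_ratio`/`corpus_sparsity_ratio` values follow directly from it and the 4096-dim embedding space. A quick check of the ratios reported across the four budgets used on this card:

```python
hidden_dim = 4096  # CSRSparsity output dimensionality
for max_active_dims in (4, 16, 64, 256):
    print(max_active_dims, round(1 - max_active_dims / hidden_dim, 4))
# 4 0.999
# 16 0.9961
# 64 0.9844
# 256 0.9375
```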
+ | Metric                | NanoMSMARCO_4 | NanoNQ_4   |
+ |:----------------------|:--------------|:-----------|
+ | cosine_accuracy@1     | 0.02          | 0.1        |
+ | cosine_accuracy@3     | 0.12          | 0.16       |
+ | cosine_accuracy@5     | 0.18          | 0.2        |
+ | cosine_accuracy@10    | 0.26          | 0.26       |
+ | cosine_precision@1    | 0.02          | 0.1        |
+ | cosine_precision@3    | 0.04          | 0.0533     |
+ | cosine_precision@5    | 0.036         | 0.04       |
+ | cosine_precision@10   | 0.026         | 0.026      |
+ | cosine_recall@1       | 0.02          | 0.1        |
+ | cosine_recall@3       | 0.12          | 0.16       |
+ | cosine_recall@5       | 0.18          | 0.19       |
+ | cosine_recall@10      | 0.26          | 0.24       |
+ | **cosine_ndcg@10**    | **0.131**     | **0.1618** |
+ | cosine_mrr@10         | 0.0911        | 0.1391     |
+ | cosine_map@100        | 0.1006        | 0.1455     |
+ | query_active_dims     | 4.0           | 4.0        |
+ | query_sparsity_ratio  | 0.999         | 0.999      |
+ | corpus_active_dims    | 4.0           | 4.0        |
+ | corpus_sparsity_ratio | 0.999         | 0.999      |
+
+ #### Sparse Nano BEIR
+
+ * Dataset: `NanoBEIR_mean_4`
+ * Evaluated with [<code>SparseNanoBEIREvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseNanoBEIREvaluator) with these parameters:
+   ```json
+   {
+       "dataset_names": [
+           "msmarco",
+           "nq"
+       ],
+       "max_active_dims": 4
+   }
+   ```
+
+ | Metric                | Value      |
+ |:----------------------|:-----------|
+ | cosine_accuracy@1     | 0.06       |
+ | cosine_accuracy@3     | 0.14       |
+ | cosine_accuracy@5     | 0.19       |
+ | cosine_accuracy@10    | 0.26       |
+ | cosine_precision@1    | 0.06       |
+ | cosine_precision@3    | 0.0467     |
+ | cosine_precision@5    | 0.038      |
+ | cosine_precision@10   | 0.026      |
+ | cosine_recall@1       | 0.06       |
+ | cosine_recall@3       | 0.14       |
+ | cosine_recall@5       | 0.185      |
+ | cosine_recall@10      | 0.25       |
+ | **cosine_ndcg@10**    | **0.1464** |
+ | cosine_mrr@10         | 0.1151     |
+ | cosine_map@100        | 0.123      |
+ | query_active_dims     | 4.0        |
+ | query_sparsity_ratio  | 0.999      |
+ | corpus_active_dims    | 4.0        |
+ | corpus_sparsity_ratio | 0.999      |
+
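The `NanoBEIR_mean_4` figures are the unweighted mean of the two per-dataset results above; for example, the headline NDCG@10:

```python
ndcg_msmarco, ndcg_nq = 0.1310, 0.1618          # NanoMSMARCO_4 and NanoNQ_4 cosine_ndcg@10
print(round((ndcg_msmarco + ndcg_nq) / 2, 4))   # 0.1464
```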
1013
+ #### Sparse Information Retrieval
1014
+
1015
+ * Datasets: `NanoMSMARCO_16` and `NanoNQ_16`
1016
+ * Evaluated with [<code>SparseInformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseInformationRetrievalEvaluator) with these parameters:
1017
+ ```json
1018
+ {
1019
+ "max_active_dims": 16
1020
+ }
1021
+ ```
1022
+
1023
+ | Metric | NanoMSMARCO_16 | NanoNQ_16 |
1024
+ |:----------------------|:---------------|:-----------|
1025
+ | cosine_accuracy@1 | 0.14 | 0.14 |
1026
+ | cosine_accuracy@3 | 0.32 | 0.32 |
1027
+ | cosine_accuracy@5 | 0.44 | 0.42 |
1028
+ | cosine_accuracy@10 | 0.62 | 0.54 |
1029
+ | cosine_precision@1 | 0.14 | 0.14 |
1030
+ | cosine_precision@3 | 0.1067 | 0.1067 |
1031
+ | cosine_precision@5 | 0.088 | 0.084 |
1032
+ | cosine_precision@10 | 0.062 | 0.054 |
1033
+ | cosine_recall@1 | 0.14 | 0.14 |
1034
+ | cosine_recall@3 | 0.32 | 0.31 |
1035
+ | cosine_recall@5 | 0.44 | 0.4 |
1036
+ | cosine_recall@10 | 0.62 | 0.51 |
1037
+ | **cosine_ndcg@10** | **0.3523** | **0.3159** |
1038
+ | cosine_mrr@10 | 0.2692 | 0.2584 |
1039
+ | cosine_map@100 | 0.2835 | 0.2664 |
1040
+ | query_active_dims | 16.0 | 16.0 |
1041
+ | query_sparsity_ratio | 0.9961 | 0.9961 |
1042
+ | corpus_active_dims | 16.0 | 16.0 |
1043
+ | corpus_sparsity_ratio | 0.9961 | 0.9961 |
1044
+
1045
+ #### Sparse Nano BEIR
1046
+
1047
+ * Dataset: `NanoBEIR_mean_16`
1048
+ * Evaluated with [<code>SparseNanoBEIREvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseNanoBEIREvaluator) with these parameters:
1049
+ ```json
1050
+ {
1051
+ "dataset_names": [
1052
+ "msmarco",
1053
+ "nq"
1054
+ ],
1055
+ "max_active_dims": 16
1056
+ }
1057
+ ```
1058
+
1059
+ | Metric | Value |
1060
+ |:----------------------|:-----------|
1061
+ | cosine_accuracy@1 | 0.14 |
1062
+ | cosine_accuracy@3 | 0.32 |
1063
+ | cosine_accuracy@5 | 0.43 |
1064
+ | cosine_accuracy@10 | 0.58 |
1065
+ | cosine_precision@1 | 0.14 |
1066
+ | cosine_precision@3 | 0.1067 |
1067
+ | cosine_precision@5 | 0.086 |
1068
+ | cosine_precision@10 | 0.058 |
1069
+ | cosine_recall@1 | 0.14 |
1070
+ | cosine_recall@3 | 0.315 |
1071
+ | cosine_recall@5 | 0.42 |
1072
+ | cosine_recall@10 | 0.565 |
1073
+ | **cosine_ndcg@10** | **0.3341** |
1074
+ | cosine_mrr@10 | 0.2638 |
1075
+ | cosine_map@100 | 0.2749 |
1076
+ | query_active_dims | 16.0 |
1077
+ | query_sparsity_ratio | 0.9961 |
1078
+ | corpus_active_dims | 16.0 |
1079
+ | corpus_sparsity_ratio | 0.9961 |
1080
+
+ #### Sparse Information Retrieval
+
+ * Datasets: `NanoMSMARCO_64` and `NanoNQ_64`
+ * Evaluated with [<code>SparseInformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseInformationRetrievalEvaluator) with these parameters:
+   ```json
+   {
+       "max_active_dims": 64
+   }
+   ```
+
+ | Metric                | NanoMSMARCO_64 | NanoNQ_64  |
+ |:----------------------|:---------------|:-----------|
+ | cosine_accuracy@1     | 0.42           | 0.36       |
+ | cosine_accuracy@3     | 0.6            | 0.58       |
+ | cosine_accuracy@5     | 0.74           | 0.74       |
+ | cosine_accuracy@10    | 0.78           | 0.78       |
+ | cosine_precision@1    | 0.42           | 0.36       |
+ | cosine_precision@3    | 0.2            | 0.2        |
+ | cosine_precision@5    | 0.148          | 0.152      |
+ | cosine_precision@10   | 0.078          | 0.082      |
+ | cosine_recall@1       | 0.42           | 0.34       |
+ | cosine_recall@3       | 0.6            | 0.54       |
+ | cosine_recall@5       | 0.74           | 0.68       |
+ | cosine_recall@10      | 0.78           | 0.73       |
+ | **cosine_ndcg@10**    | **0.5989**     | **0.5402** |
+ | cosine_mrr@10         | 0.5405         | 0.4945     |
+ | cosine_map@100        | 0.5486         | 0.4793     |
+ | query_active_dims     | 64.0           | 64.0       |
+ | query_sparsity_ratio  | 0.9844         | 0.9844     |
+ | corpus_active_dims    | 64.0           | 64.0       |
+ | corpus_sparsity_ratio | 0.9844         | 0.9844     |
+
+ #### Sparse Nano BEIR
+
+ * Dataset: `NanoBEIR_mean_64`
+ * Evaluated with [<code>SparseNanoBEIREvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseNanoBEIREvaluator) with these parameters:
+   ```json
+   {
+       "dataset_names": [
+           "msmarco",
+           "nq"
+       ],
+       "max_active_dims": 64
+   }
+   ```
+
+ | Metric                | Value      |
+ |:----------------------|:-----------|
+ | cosine_accuracy@1     | 0.39       |
+ | cosine_accuracy@3     | 0.59       |
+ | cosine_accuracy@5     | 0.74       |
+ | cosine_accuracy@10    | 0.78       |
+ | cosine_precision@1    | 0.39       |
+ | cosine_precision@3    | 0.2        |
+ | cosine_precision@5    | 0.15       |
+ | cosine_precision@10   | 0.08       |
+ | cosine_recall@1       | 0.38       |
+ | cosine_recall@3       | 0.57       |
+ | cosine_recall@5       | 0.71       |
+ | cosine_recall@10      | 0.755      |
+ | **cosine_ndcg@10**    | **0.5695** |
+ | cosine_mrr@10         | 0.5175     |
+ | cosine_map@100        | 0.5139     |
+ | query_active_dims     | 64.0       |
+ | query_sparsity_ratio  | 0.9844     |
+ | corpus_active_dims    | 64.0       |
+ | corpus_sparsity_ratio | 0.9844     |
+
+ #### Sparse Information Retrieval
+
+ * Datasets: `NanoMSMARCO_256` and `NanoNQ_256`
+ * Evaluated with [<code>SparseInformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseInformationRetrievalEvaluator) with these parameters:
+   ```json
+   {
+       "max_active_dims": 256
+   }
+   ```
+
+ | Metric                | NanoMSMARCO_256 | NanoNQ_256 |
+ |:----------------------|:----------------|:-----------|
+ | cosine_accuracy@1     | 0.44            | 0.56       |
+ | cosine_accuracy@3     | 0.62            | 0.72       |
+ | cosine_accuracy@5     | 0.68            | 0.78       |
+ | cosine_accuracy@10    | 0.82            | 0.86       |
+ | cosine_precision@1    | 0.44            | 0.56       |
+ | cosine_precision@3    | 0.2067          | 0.24       |
+ | cosine_precision@5    | 0.136           | 0.16       |
+ | cosine_precision@10   | 0.082           | 0.092      |
+ | cosine_recall@1       | 0.44            | 0.54       |
+ | cosine_recall@3       | 0.62            | 0.67       |
+ | cosine_recall@5       | 0.68            | 0.72       |
+ | cosine_recall@10      | 0.82            | 0.82       |
+ | **cosine_ndcg@10**    | **0.6219**      | **0.6834** |
+ | cosine_mrr@10         | 0.5601          | 0.6571     |
+ | cosine_map@100        | 0.5703          | 0.638      |
+ | query_active_dims     | 256.0           | 256.0      |
+ | query_sparsity_ratio  | 0.9375          | 0.9375     |
+ | corpus_active_dims    | 256.0           | 256.0      |
+ | corpus_sparsity_ratio | 0.9375          | 0.9375     |
+
+ #### Sparse Nano BEIR
+
+ * Dataset: `NanoBEIR_mean_256`
+ * Evaluated with [<code>SparseNanoBEIREvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseNanoBEIREvaluator) with these parameters:
+   ```json
+   {
+       "dataset_names": [
+           "msmarco",
+           "nq"
+       ],
+       "max_active_dims": 256
+   }
+   ```
+
+ | Metric                | Value      |
+ |:----------------------|:-----------|
+ | cosine_accuracy@1     | 0.5        |
+ | cosine_accuracy@3     | 0.67       |
+ | cosine_accuracy@5     | 0.73       |
+ | cosine_accuracy@10    | 0.84       |
+ | cosine_precision@1    | 0.5        |
+ | cosine_precision@3    | 0.2233     |
+ | cosine_precision@5    | 0.148      |
+ | cosine_precision@10   | 0.087      |
+ | cosine_recall@1       | 0.49       |
+ | cosine_recall@3       | 0.645      |
+ | cosine_recall@5       | 0.7        |
+ | cosine_recall@10      | 0.82       |
+ | **cosine_ndcg@10**    | **0.6527** |
+ | cosine_mrr@10         | 0.6086     |
+ | cosine_map@100        | 0.6042     |
+ | query_active_dims     | 256.0      |
+ | query_sparsity_ratio  | 0.9375     |
+ | corpus_active_dims    | 256.0      |
+ | corpus_sparsity_ratio | 0.9375     |
+
+ <!--
+ ## Bias, Risks and Limitations
+
+ *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
+ -->
+
+ <!--
+ ### Recommendations
+
+ *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
+ -->
+
+ ## Training Details
+
+ ### Training Dataset
+
+ #### natural-questions
+
+ * Dataset: [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions) at [f9e894e](https://huggingface.co/datasets/sentence-transformers/natural-questions/tree/f9e894e1081e206e577b4eaa9ee6de2b06ae6f17)
+ * Size: 99,000 training samples
+ * Columns: <code>query</code> and <code>answer</code>
+ * Approximate statistics based on the first 1000 samples:
+   |         | query                                                                              | answer                                                                              |
+   |:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
+   | type    | string                                                                             | string                                                                              |
+   | details | <ul><li>min: 10 tokens</li><li>mean: 11.71 tokens</li><li>max: 26 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 131.81 tokens</li><li>max: 450 tokens</li></ul> |
+ * Samples:
+ | query | answer |
+ |:--------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+ | <code>who played the father in papa don't preach</code> | <code>Alex McArthur Alex McArthur (born March 6, 1957) is an American actor.</code> |
+ | <code>where was the location of the battle of hastings</code> | <code>Battle of Hastings The Battle of Hastings[a] was fought on 14 October 1066 between the Norman-French army of William, the Duke of Normandy, and an English army under the Anglo-Saxon King Harold Godwinson, beginning the Norman conquest of England. It took place approximately 7 miles (11 kilometres) northwest of Hastings, close to the present-day town of Battle, East Sussex, and was a decisive Norman victory.</code> |
+ | <code>how many puppies can a dog give birth to</code> | <code>Canine reproduction The largest litter size to date was set by a Neapolitan Mastiff in Manea, Cambridgeshire, UK on November 29, 2004; the litter was 24 puppies.[22]</code> |
+ * Loss: [<code>CSRLoss</code>](https://sbert.net/docs/package_reference/sparse_encoder/losses.html#csrloss) with these parameters:
+   ```json
+   {
+       "beta": 0.1,
+       "gamma": 1.0,
+       "loss": "SparseMultipleNegativesRankingLoss(scale=20.0, similarity_fct='cos_sim')"
+   }
+   ```
+
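The wrapped `SparseMultipleNegativesRankingLoss` scores each query against every in-batch document with cosine similarity scaled by 20, then applies cross-entropy with the same-index document as the positive. A self-contained NumPy sketch of that ranking term only (the reconstruction and auxiliary terms that `CSRLoss` adds on top, weighted via `beta` and `gamma`, are omitted):

```python
import numpy as np

def mnrl(queries, docs, scale=20.0):
    """Multiple negatives ranking loss: for row i, docs[i] is the positive,
    every other in-batch document acts as a negative."""
    q = queries / np.linalg.norm(queries, axis=-1, keepdims=True)
    d = docs / np.linalg.norm(docs, axis=-1, keepdims=True)
    scores = scale * (q @ d.T)                        # (batch, batch) scaled cosine similarities
    m = scores.max(axis=1, keepdims=True)             # numerically stable log-softmax
    log_probs = scores - m - np.log(np.exp(scores - m).sum(axis=1, keepdims=True))
    return -np.diag(log_probs).mean()                 # cross-entropy against the diagonal

rng = np.random.default_rng(0)
docs = rng.standard_normal((8, 64))
loss_aligned = mnrl(docs, docs)                       # queries == positives: near-zero loss
loss_random = mnrl(rng.standard_normal((8, 64)), docs)
print(loss_aligned < loss_random)                     # True
```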
+ ### Evaluation Dataset
+
+ #### natural-questions
+
+ * Dataset: [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions) at [f9e894e](https://huggingface.co/datasets/sentence-transformers/natural-questions/tree/f9e894e1081e206e577b4eaa9ee6de2b06ae6f17)
+ * Size: 1,000 evaluation samples
+ * Columns: <code>query</code> and <code>answer</code>
+ * Approximate statistics based on the first 1000 samples:
+   |         | query                                                                              | answer                                                                               |
+   |:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
+   | type    | string                                                                             | string                                                                               |
+   | details | <ul><li>min: 10 tokens</li><li>mean: 11.69 tokens</li><li>max: 23 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 134.01 tokens</li><li>max: 512 tokens</li></ul> |
+ * Samples:
+ | query | answer |
+ |:-------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+ | <code>where is the tiber river located in italy</code> | <code>Tiber The Tiber (/ˈtaɪbər/, Latin: Tiberis,[1] Italian: Tevere [ˈteːvere])[2] is the third-longest river in Italy, rising in the Apennine Mountains in Emilia-Romagna and flowing 406 kilometres (252 mi) through Tuscany, Umbria and Lazio, where it is joined by the river Aniene, to the Tyrrhenian Sea, between Ostia and Fiumicino.[3] It drains a basin estimated at 17,375 square kilometres (6,709 sq mi). The river has achieved lasting fame as the main watercourse of the city of Rome, founded on its eastern banks.</code> |
+ | <code>what kind of car does jay gatsby drive</code> | <code>Jay Gatsby At the Buchanan home, Jordan Baker, Nick, Jay, and the Buchanans decide to visit New York City. Tom borrows Gatsby's yellow Rolls Royce to drive up to the city. On the way to New York City, Tom makes a detour at a gas station in "the Valley of Ashes", a run-down part of Long Island. The owner, George Wilson, shares his concern that his wife, Myrtle, may be having an affair. This unnerves Tom, who has been having an affair with Myrtle, and he leaves in a hurry.</code> |
+ | <code>who sings if i can dream about you</code> | <code>I Can Dream About You "I Can Dream About You" is a song performed by American singer Dan Hartman on the soundtrack album of the film Streets of Fire. Released in 1984 as a single from the soundtrack, and included on Hartman's album I Can Dream About You, it reached number 6 on the Billboard Hot 100.[1]</code> |
+ * Loss: [<code>CSRLoss</code>](https://sbert.net/docs/package_reference/sparse_encoder/losses.html#csrloss) with these parameters:
+   ```json
+   {
+       "beta": 0.1,
+       "gamma": 1.0,
+       "loss": "SparseMultipleNegativesRankingLoss(scale=20.0, similarity_fct='cos_sim')"
+   }
+   ```
+
+ ### Training Hyperparameters
+ #### Non-Default Hyperparameters
+
+ - `eval_strategy`: steps
+ - `per_device_train_batch_size`: 64
+ - `per_device_eval_batch_size`: 64
+ - `learning_rate`: 4e-05
+ - `num_train_epochs`: 1
+ - `bf16`: True
+ - `load_best_model_at_end`: True
+ - `batch_sampler`: no_duplicates
+
+ #### All Hyperparameters
+ <details><summary>Click to expand</summary>
+
+ - `overwrite_output_dir`: False
+ - `do_predict`: False
+ - `eval_strategy`: steps
+ - `prediction_loss_only`: True
+ - `per_device_train_batch_size`: 64
+ - `per_device_eval_batch_size`: 64
+ - `per_gpu_train_batch_size`: None
+ - `per_gpu_eval_batch_size`: None
+ - `gradient_accumulation_steps`: 1
+ - `eval_accumulation_steps`: None
+ - `torch_empty_cache_steps`: None
+ - `learning_rate`: 4e-05
+ - `weight_decay`: 0.0
+ - `adam_beta1`: 0.9
+ - `adam_beta2`: 0.999
+ - `adam_epsilon`: 1e-08
+ - `max_grad_norm`: 1.0
+ - `num_train_epochs`: 1
+ - `max_steps`: -1
+ - `lr_scheduler_type`: linear
+ - `lr_scheduler_kwargs`: {}
+ - `warmup_ratio`: 0.0
+ - `warmup_steps`: 0
+ - `log_level`: passive
+ - `log_level_replica`: warning
+ - `log_on_each_node`: True
+ - `logging_nan_inf_filter`: True
+ - `save_safetensors`: True
+ - `save_on_each_node`: False
+ - `save_only_model`: False
+ - `restore_callback_states_from_checkpoint`: False
+ - `no_cuda`: False
+ - `use_cpu`: False
+ - `use_mps_device`: False
+ - `seed`: 42
+ - `data_seed`: None
+ - `jit_mode_eval`: False
+ - `use_ipex`: False
+ - `bf16`: True
+ - `fp16`: False
+ - `fp16_opt_level`: O1
+ - `half_precision_backend`: auto
+ - `bf16_full_eval`: False
+ - `fp16_full_eval`: False
+ - `tf32`: None
+ - `local_rank`: 0
+ - `ddp_backend`: None
+ - `tpu_num_cores`: None
+ - `tpu_metrics_debug`: False
+ - `debug`: []
+ - `dataloader_drop_last`: False
+ - `dataloader_num_workers`: 0
+ - `dataloader_prefetch_factor`: None
+ - `past_index`: -1
+ - `disable_tqdm`: False
+ - `remove_unused_columns`: True
+ - `label_names`: None
+ - `load_best_model_at_end`: True
+ - `ignore_data_skip`: False
+ - `fsdp`: []
+ - `fsdp_min_num_params`: 0
+ - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
+ - `fsdp_transformer_layer_cls_to_wrap`: None
+ - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
+ - `deepspeed`: None
+ - `label_smoothing_factor`: 0.0
+ - `optim`: adamw_torch
+ - `optim_args`: None
+ - `adafactor`: False
+ - `group_by_length`: False
+ - `length_column_name`: length
+ - `ddp_find_unused_parameters`: None
+ - `ddp_bucket_cap_mb`: None
+ - `ddp_broadcast_buffers`: False
+ - `dataloader_pin_memory`: True
+ - `dataloader_persistent_workers`: False
+ - `skip_memory_metrics`: True
+ - `use_legacy_prediction_loop`: False
+ - `push_to_hub`: False
+ - `resume_from_checkpoint`: None
+ - `hub_model_id`: None
+ - `hub_strategy`: every_save
+ - `hub_private_repo`: None
+ - `hub_always_push`: False
+ - `gradient_checkpointing`: False
+ - `gradient_checkpointing_kwargs`: None
+ - `include_inputs_for_metrics`: False
+ - `include_for_metrics`: []
+ - `eval_do_concat_batches`: True
+ - `fp16_backend`: auto
+ - `push_to_hub_model_id`: None
+ - `push_to_hub_organization`: None
+ - `mp_parameters`:
+ - `auto_find_batch_size`: False
+ - `full_determinism`: False
+ - `torchdynamo`: None
+ - `ray_scope`: last
+ - `ddp_timeout`: 1800
+ - `torch_compile`: False
+ - `torch_compile_backend`: None
+ - `torch_compile_mode`: None
+ - `include_tokens_per_second`: False
+ - `include_num_input_tokens_seen`: False
+ - `neftune_noise_alpha`: None
+ - `optim_target_modules`: None
+ - `batch_eval_metrics`: False
+ - `eval_on_start`: False
+ - `use_liger_kernel`: False
+ - `eval_use_gather_object`: False
+ - `average_tokens_across_devices`: False
+ - `prompts`: None
+ - `batch_sampler`: no_duplicates
+ - `multi_dataset_batch_sampler`: proportional
+ - `router_mapping`: {}
+ - `learning_rate_mapping`: {}
+
+ </details>
+
+ ### Training Logs
+ | Epoch | Step | Training Loss | Validation Loss | NanoMSMARCO_4_cosine_ndcg@10 | NanoNQ_4_cosine_ndcg@10 | NanoBEIR_mean_4_cosine_ndcg@10 | NanoMSMARCO_16_cosine_ndcg@10 | NanoNQ_16_cosine_ndcg@10 | NanoBEIR_mean_16_cosine_ndcg@10 | NanoMSMARCO_64_cosine_ndcg@10 | NanoNQ_64_cosine_ndcg@10 | NanoBEIR_mean_64_cosine_ndcg@10 | NanoMSMARCO_256_cosine_ndcg@10 | NanoNQ_256_cosine_ndcg@10 | NanoBEIR_mean_256_cosine_ndcg@10 |
+ |:----------:|:-------:|:-------------:|:---------------:|:----------------------------:|:-----------------------:|:------------------------------:|:-----------------------------:|:------------------------:|:-------------------------------:|:-----------------------------:|:------------------------:|:-------------------------------:|:------------------------------:|:-------------------------:|:--------------------------------:|
+ | -1 | -1 | - | - | 0.0850 | 0.1222 | 0.1036 | 0.4256 | 0.3267 | 0.3761 | 0.5827 | 0.5843 | 0.5835 | 0.5987 | 0.7005 | 0.6496 |
+ | 0.0646 | 100 | 0.6568 | - | - | - | - | - | - | - | - | - | - | - | - | - |
+ | 0.1293 | 200 | 0.561 | - | - | - | - | - | - | - | - | - | - | - | - | - |
+ | **0.1939** | **300** | **0.5248** | **0.4118** | **0.131** | **0.1618** | **0.1464** | **0.3523** | **0.3159** | **0.3341** | **0.5989** | **0.5402** | **0.5695** | **0.6219** | **0.6834** | **0.6527** |
+ | 0.2586 | 400 | 0.4995 | - | - | - | - | - | - | - | - | - | - | - | - | - |
+ | 0.3232 | 500 | 0.484 | - | - | - | - | - | - | - | - | - | - | - | - | - |
+ | 0.3878 | 600 | 0.4773 | 0.3882 | 0.2023 | 0.1465 | 0.1744 | 0.3397 | 0.3617 | 0.3507 | 0.5710 | 0.5702 | 0.5706 | 0.6091 | 0.6610 | 0.6351 |
+ | 0.4525 | 700 | 0.464 | - | - | - | - | - | - | - | - | - | - | - | - | - |
+ | 0.5171 | 800 | 0.4529 | - | - | - | - | - | - | - | - | - | - | - | - | - |
+ | 0.5818 | 900 | 0.4524 | 0.3753 | 0.1495 | 0.1179 | 0.1337 | 0.3072 | 0.3473 | 0.3272 | 0.5718 | 0.5525 | 0.5622 | 0.6084 | 0.6660 | 0.6372 |
+ | 0.6464 | 1000 | 0.4486 | - | - | - | - | - | - | - | - | - | - | - | - | - |
+ | 0.7111 | 1100 | 0.4349 | - | - | - | - | - | - | - | - | - | - | - | - | - |
+ | 0.7757 | 1200 | 0.4382 | 0.3690 | 0.1815 | 0.0924 | 0.1370 | 0.3328 | 0.3493 | 0.3410 | 0.5311 | 0.5480 | 0.5396 | 0.6086 | 0.6486 | 0.6286 |
+ | 0.8403 | 1300 | 0.4394 | - | - | - | - | - | - | - | - | - | - | - | - | - |
+ | 0.9050 | 1400 | 0.427 | - | - | - | - | - | - | - | - | - | - | - | - | - |
+ | 0.9696 | 1500 | 0.4312 | 0.3666 | 0.1746 | 0.1350 | 0.1548 | 0.3395 | 0.2952 | 0.3174 | 0.5511 | 0.5252 | 0.5381 | 0.6162 | 0.6494 | 0.6328 |
+ | -1 | -1 | - | - | 0.1310 | 0.1618 | 0.1464 | 0.3523 | 0.3159 | 0.3341 | 0.5989 | 0.5402 | 0.5695 | 0.6219 | 0.6834 | 0.6527 |
+
+ * The bold row denotes the saved checkpoint.
+
+ ### Environmental Impact
+ Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon).
+ - **Energy Consumed**: 0.145 kWh
+ - **Carbon Emitted**: 0.056 kg of CO2
+ - **Hours Used**: 0.379 hours
+
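Two figures implied by the measurements above, assuming they cover the full training run: the effective grid carbon intensity and the average power draw.

```python
energy_kwh, co2_kg, hours = 0.145, 0.056, 0.379
print(round(co2_kg / energy_kwh, 3), "kg CO2 per kWh")     # 0.386 kg CO2 per kWh
print(round(1000 * energy_kwh / hours), "W average draw")  # 383 W average draw
```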
+ ### Training Hardware
+ - **On Cloud**: No
+ - **GPU Model**: 1 x NVIDIA GeForce RTX 3090
+ - **CPU Model**: 13th Gen Intel(R) Core(TM) i7-13700K
+ - **RAM Size**: 31.78 GB
+
+ ### Framework Versions
+ - Python: 3.11.6
+ - Sentence Transformers: 4.2.0.dev0
+ - Transformers: 4.52.4
+ - PyTorch: 2.6.0+cu124
+ - Accelerate: 1.5.1
+ - Datasets: 2.21.0
+ - Tokenizers: 0.21.1
+
+ ## Citation
+
+ ### BibTeX
+
+ #### Sentence Transformers
+ ```bibtex
+ @inproceedings{reimers-2019-sentence-bert,
+     title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
+     author = "Reimers, Nils and Gurevych, Iryna",
+     booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
+     month = "11",
+     year = "2019",
+     publisher = "Association for Computational Linguistics",
+     url = "https://arxiv.org/abs/1908.10084",
+ }
+ ```
+
+ #### CSRLoss
+ ```bibtex
+ @misc{wen2025matryoshkarevisitingsparsecoding,
+     title={Beyond Matryoshka: Revisiting Sparse Coding for Adaptive Representation},
+     author={Tiansheng Wen and Yifei Wang and Zequn Zeng and Zhong Peng and Yudi Su and Xinyang Liu and Bo Chen and Hongwei Liu and Stefanie Jegelka and Chenyu You},
+     year={2025},
+     eprint={2503.01776},
+     archivePrefix={arXiv},
+     primaryClass={cs.LG},
+     url={https://arxiv.org/abs/2503.01776},
+ }
+ ```
+
+ #### SparseMultipleNegativesRankingLoss
+ ```bibtex
+ @misc{henderson2017efficient,
+     title={Efficient Natural Language Response Suggestion for Smart Reply},
+     author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
+     year={2017},
+     eprint={1705.00652},
+     archivePrefix={arXiv},
+     primaryClass={cs.CL}
+ }
+ ```
+
+ <!--
+ ## Glossary
+
+ *Clearly define terms in order to be accessible across audiences.*
+ -->
+
+ <!--
+ ## Model Card Authors
+
+ *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
+ -->
+
+ <!--
+ ## Model Card Contact
+
+ *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
+ -->
config.json ADDED
@@ -0,0 +1,25 @@
+ {
+   "architectures": [
+     "BertModel"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "classifier_dropout": null,
+   "gradient_checkpointing": false,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 1024,
+   "initializer_range": 0.02,
+   "intermediate_size": 4096,
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 512,
+   "model_type": "bert",
+   "num_attention_heads": 16,
+   "num_hidden_layers": 24,
+   "pad_token_id": 0,
+   "position_embedding_type": "absolute",
+   "torch_dtype": "float32",
+   "transformers_version": "4.52.4",
+   "type_vocab_size": 2,
+   "use_cache": false,
+   "vocab_size": 30522
+ }
config_sentence_transformers.json ADDED
@@ -0,0 +1,15 @@
+ {
+   "__version__": {
+     "sentence_transformers": "4.2.0.dev0",
+     "transformers": "4.52.4",
+     "pytorch": "2.6.0+cu124"
+   },
+   "prompts": {
+     "query": "Represent this sentence for searching relevant passages: ",
+     "document": "",
+     "passage": ""
+   },
+   "default_prompt_name": null,
+   "model_type": "SparseEncoder",
+   "similarity_fn_name": "cosine"
+ }
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e86b2a89f7f8933cf7bd90586cdf69d0012140e412818234b234f807e51ee574
+ size 1340612432
modules.json ADDED
@@ -0,0 +1,20 @@
+ [
+   {
+     "idx": 0,
+     "name": "0",
+     "path": "",
+     "type": "sentence_transformers.models.Transformer"
+   },
+   {
+     "idx": 1,
+     "name": "1",
+     "path": "1_Pooling",
+     "type": "sentence_transformers.models.Pooling"
+   },
+   {
+     "idx": 2,
+     "name": "2",
+     "path": "2_CSRSparsity",
+     "type": "sentence_transformers.sparse_encoder.models.CSRSparsity"
+   }
+ ]
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "max_seq_length": 512,
+   "do_lower_case": false
+ }
special_tokens_map.json ADDED
@@ -0,0 +1,37 @@
+ {
+   "cls_token": {
+     "content": "[CLS]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "mask_token": {
+     "content": "[MASK]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": {
+     "content": "[PAD]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "sep_token": {
+     "content": "[SEP]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "unk_token": {
+     "content": "[UNK]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
tokenizer_config.json ADDED
@@ -0,0 +1,58 @@
+ {
+   "added_tokens_decoder": {
+     "0": {
+       "content": "[PAD]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "100": {
+       "content": "[UNK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "101": {
+       "content": "[CLS]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "102": {
+       "content": "[SEP]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "103": {
+       "content": "[MASK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "clean_up_tokenization_spaces": true,
+   "cls_token": "[CLS]",
+   "do_basic_tokenize": true,
+   "do_lower_case": true,
+   "extra_special_tokens": {},
+   "mask_token": "[MASK]",
+   "model_max_length": 512,
+   "never_split": null,
+   "pad_token": "[PAD]",
+   "sep_token": "[SEP]",
+   "strip_accents": null,
+   "tokenize_chinese_chars": true,
+   "tokenizer_class": "BertTokenizer",
+   "unk_token": "[UNK]"
+ }
vocab.txt ADDED
The diff for this file is too large to render. See raw diff