tomaarsen (HF Staff) committed
Commit baa6396 · verified · 1 Parent(s): e7b2da0

Add new SparseEncoder model

1_Pooling/config.json ADDED
@@ -0,0 +1,10 @@
{
  "word_embedding_dimension": 1024,
  "pooling_mode_cls_token": true,
  "pooling_mode_mean_tokens": false,
  "pooling_mode_max_tokens": false,
  "pooling_mode_mean_sqrt_len_tokens": false,
  "pooling_mode_weightedmean_tokens": false,
  "pooling_mode_lasttoken": false,
  "include_prompt": true
}
2_CSRSparsity/config.json ADDED
@@ -0,0 +1,8 @@
{
  "input_dim": 1024,
  "hidden_dim": 4096,
  "k": 256,
  "k_aux": 512,
  "normalize": false,
  "dead_threshold": 30
}
2_CSRSparsity/model.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9ca7af4f20124982838e7c581a076a4684963f356c0e703264fe4b67fb8627de
size 16830864
README.md ADDED
@@ -0,0 +1,1150 @@
---
language:
- en
license: apache-2.0
tags:
- sentence-transformers
- sparse-encoder
- sparse
- csr
- generated_from_trainer
- dataset_size:99000
- loss:CSRLoss
- loss:SparseMultipleNegativesRankingLoss
base_model: mixedbread-ai/mxbai-embed-large-v1
widget:
- text: Saudi Arabia–United Arab Emirates relations However, the UAE and Saudi Arabia
    continue to take somewhat differing stances on regional conflicts such the Yemeni
    Civil War, where the UAE opposes Al-Islah, and supports the Southern Movement,
    which has fought against Saudi-backed forces, and the Syrian Civil War, where
    the UAE has disagreed with Saudi support for Islamist movements.[4]
- text: Economy of New Zealand New Zealand's diverse market economy has a sizable
    service sector, accounting for 63% of all GDP activity in 2013.[17] Large scale
    manufacturing industries include aluminium production, food processing, metal
    fabrication, wood and paper products. Mining, manufacturing, electricity, gas,
    water, and waste services accounted for 16.5% of GDP in 2013.[17] The primary
    sector continues to dominate New Zealand's exports, despite accounting for 6.5%
    of GDP in 2013.[17]
- text: who was the first president of indian science congress meeting held in kolkata
    in 1914
- text: Get Over It (Eagles song) "Get Over It" is a song by the Eagles released as
    a single after a fourteen-year breakup. It was also the first song written by
    bandmates Don Henley and Glenn Frey when the band reunited. "Get Over It" was
    played live for the first time during their Hell Freezes Over tour in 1994. It
    returned the band to the U.S. Top 40 after a fourteen-year absence, peaking at
    No. 31 on the Billboard Hot 100 chart. It also hit No. 4 on the Billboard Mainstream
    Rock Tracks chart. The song was not played live by the Eagles after the "Hell
    Freezes Over" tour in 1994. It remains the group's last Top 40 hit in the U.S.
- text: 'Cornelius the Centurion Cornelius (Greek: Κορνήλιος) was a Roman centurion
    who is considered by Christians to be one of the first Gentiles to convert to
    the faith, as related in Acts of the Apostles.'
datasets:
- sentence-transformers/natural-questions
pipeline_tag: feature-extraction
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
- query_active_dims
- query_sparsity_ratio
- corpus_active_dims
- corpus_sparsity_ratio
co2_eq_emissions:
  emissions: 42.81821457704325
  energy_consumed: 0.11015691860871116
  source: codecarbon
  training_type: fine-tuning
  on_cloud: false
  cpu_model: 13th Gen Intel(R) Core(TM) i7-13700K
  ram_total_size: 31.777088165283203
  hours_used: 0.274
  hardware_used: 1 x NVIDIA GeForce RTX 3090
model-index:
- name: Sparse CSR model trained on Natural Questions
  results:
  - task:
      type: sparse-information-retrieval
      name: Sparse Information Retrieval
    dataset:
      name: nq eval 4
      type: nq_eval_4
    metrics:
    - type: cosine_accuracy@1
      value: 0.341
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 0.53
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 0.616
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 0.71
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.341
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.1766666666666667
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.12319999999999999
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.071
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.341
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 0.53
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 0.616
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 0.71
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.5177559532868556
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.4569571428571428
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.46808238304226085
      name: Cosine Map@100
    - type: query_active_dims
      value: 4.0
      name: Query Active Dims
    - type: query_sparsity_ratio
      value: 0.9990234375
      name: Query Sparsity Ratio
    - type: corpus_active_dims
      value: 4.0
      name: Corpus Active Dims
    - type: corpus_sparsity_ratio
      value: 0.9990234375
      name: Corpus Sparsity Ratio
  - task:
      type: sparse-information-retrieval
      name: Sparse Information Retrieval
    dataset:
      name: nq eval 8
      type: nq_eval_8
    metrics:
    - type: cosine_accuracy@1
      value: 0.479
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 0.683
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 0.743
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 0.827
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.479
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.22766666666666666
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.14859999999999998
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.08270000000000001
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.479
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 0.683
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 0.743
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 0.827
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.6514732993360963
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.5954253968253969
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.602459158736598
      name: Cosine Map@100
    - type: query_active_dims
      value: 8.0
      name: Query Active Dims
    - type: query_sparsity_ratio
      value: 0.998046875
      name: Query Sparsity Ratio
    - type: corpus_active_dims
      value: 8.0
      name: Corpus Active Dims
    - type: corpus_sparsity_ratio
      value: 0.998046875
      name: Corpus Sparsity Ratio
  - task:
      type: sparse-information-retrieval
      name: Sparse Information Retrieval
    dataset:
      name: nq eval 16
      type: nq_eval_16
    metrics:
    - type: cosine_accuracy@1
      value: 0.61
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 0.792
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 0.843
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 0.9
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.61
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.264
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.16860000000000003
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.09
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.61
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 0.792
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 0.843
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 0.9
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.7573375805688765
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.7114896825396828
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.7159603693257915
      name: Cosine Map@100
    - type: query_active_dims
      value: 16.0
      name: Query Active Dims
    - type: query_sparsity_ratio
      value: 0.99609375
      name: Query Sparsity Ratio
    - type: corpus_active_dims
      value: 16.0
      name: Corpus Active Dims
    - type: corpus_sparsity_ratio
      value: 0.99609375
      name: Corpus Sparsity Ratio
  - task:
      type: sparse-information-retrieval
      name: Sparse Information Retrieval
    dataset:
      name: nq eval 32
      type: nq_eval_32
    metrics:
    - type: cosine_accuracy@1
      value: 0.739
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 0.871
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 0.899
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 0.936
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.739
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.2903333333333333
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.17980000000000002
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.0936
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.739
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 0.871
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 0.899
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 0.936
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.8407099394827843
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.8098075396825399
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.8124255549328265
      name: Cosine Map@100
    - type: query_active_dims
      value: 32.0
      name: Query Active Dims
    - type: query_sparsity_ratio
      value: 0.9921875
      name: Query Sparsity Ratio
    - type: corpus_active_dims
      value: 32.0
      name: Corpus Active Dims
    - type: corpus_sparsity_ratio
      value: 0.9921875
      name: Corpus Sparsity Ratio
  - task:
      type: sparse-information-retrieval
      name: Sparse Information Retrieval
    dataset:
      name: nq eval 64
      type: nq_eval_64
    metrics:
    - type: cosine_accuracy@1
      value: 0.775
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 0.895
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 0.925
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 0.951
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.775
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.2983333333333333
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.18500000000000003
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.0951
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.775
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 0.895
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 0.925
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 0.951
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.8672657281787072
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.8399420634920639
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.8417827624389276
      name: Cosine Map@100
    - type: query_active_dims
      value: 63.992000579833984
      name: Query Active Dims
    - type: query_sparsity_ratio
      value: 0.984376952983439
      name: Query Sparsity Ratio
    - type: corpus_active_dims
      value: 64.0
      name: Corpus Active Dims
    - type: corpus_sparsity_ratio
      value: 0.984375
      name: Corpus Sparsity Ratio
  - task:
      type: sparse-information-retrieval
      name: Sparse Information Retrieval
    dataset:
      name: nq eval 128
      type: nq_eval_128
    metrics:
    - type: cosine_accuracy@1
      value: 0.797
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 0.901
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 0.933
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 0.951
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.797
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.30033333333333334
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.18660000000000002
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.0951
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.797
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 0.901
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 0.933
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 0.951
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.8780719613731008
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.8541857142857148
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.8561013158199787
      name: Cosine Map@100
    - type: query_active_dims
      value: 119.21700286865234
      name: Query Active Dims
    - type: query_sparsity_ratio
      value: 0.9708942864090204
      name: Query Sparsity Ratio
    - type: corpus_active_dims
      value: 119.6520004272461
      name: Corpus Active Dims
    - type: corpus_sparsity_ratio
      value: 0.9707880858331919
      name: Corpus Sparsity Ratio
  - task:
      type: sparse-information-retrieval
      name: Sparse Information Retrieval
    dataset:
      name: nq eval 256
      type: nq_eval_256
    metrics:
    - type: cosine_accuracy@1
      value: 0.8
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 0.901
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 0.933
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 0.951
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.8
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.30033333333333334
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.18660000000000002
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.0951
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.8
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 0.901
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 0.933
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 0.951
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.8788975201919854
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.8553369047619053
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.8573055135070745
      name: Cosine Map@100
    - type: query_active_dims
      value: 133.42999267578125
      name: Query Active Dims
    - type: query_sparsity_ratio
      value: 0.9674243181943893
      name: Query Sparsity Ratio
    - type: corpus_active_dims
      value: 129.16900634765625
      name: Corpus Active Dims
    - type: corpus_sparsity_ratio
      value: 0.9684645980596542
      name: Corpus Sparsity Ratio
---

# Sparse CSR model trained on Natural Questions

This is a [CSR Sparse Encoder](https://www.sbert.net/docs/sparse_encoder/usage/usage.html) model finetuned from [mixedbread-ai/mxbai-embed-large-v1](https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1) on the [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions) dataset using the [sentence-transformers](https://www.SBERT.net) library. It maps sentences & paragraphs to a 4096-dimensional sparse vector space with 256 maximum active dimensions and can be used for semantic search and sparse retrieval.

## Model Details

### Model Description
- **Model Type:** CSR Sparse Encoder
- **Base model:** [mixedbread-ai/mxbai-embed-large-v1](https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1) <!-- at revision db9d1fe0f31addb4978201b2bf3e577f3f8900d2 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 4096 dimensions (trained with 256 maximum active dimensions)
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
    - [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions)
- **Language:** en
- **License:** apache-2.0

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Documentation:** [Sparse Encoder Documentation](https://www.sbert.net/docs/sparse_encoder/usage/usage.html)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sparse Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=sparse-encoder)

### Full Model Architecture

```
SparseEncoder(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): CSRSparsity({'input_dim': 1024, 'hidden_dim': 4096, 'k': 256, 'k_aux': 512, 'normalize': False, 'dead_threshold': 30})
)
```
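
The `CSRSparsity` module acts as a sparse autoencoder head on top of the pooled CLS embedding: it expands the 1024-dimensional dense vector to 4096 dimensions and keeps only the `k = 256` largest activations, while `k_aux = 512` and `dead_threshold = 30` are training-time settings used to revive rarely activated dimensions. The result is a mostly-zero embedding whose number of active dimensions can be reduced even further at inference time, as the evaluation results below show.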

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.
```python
from sentence_transformers import SparseEncoder

# Download from the 🤗 Hub
model = SparseEncoder("tomaarsen/csr-mxbai-embed-large-v1-nq-cos-sim-scale-5-gamma-1-detach-2")
# Run inference
queries = [
    "who is cornelius in the book of acts",
]
documents = [
    'Cornelius the Centurion Cornelius (Greek: Κορνήλιος) was a Roman centurion who is considered by Christians to be one of the first Gentiles to convert to the faith, as related in Acts of the Apostles.',
    "Joe Ranft Ranft reunited with Lasseter when he was hired by Pixar in 1991 as their head of story.[1] There he worked on all of their films produced up to 2006; this included Toy Story (for which he received an Academy Award nomination) and A Bug's Life, as the co-story writer and others as story supervisor. His final film was Cars. He also voiced characters in many of the films, including Heimlich the caterpillar in A Bug's Life, Wheezy the penguin in Toy Story 2, and Jacques the shrimp in Finding Nemo.[1]",
    'Wonderful Tonight "Wonderful Tonight" is a ballad written by Eric Clapton. It was included on Clapton\'s 1977 album Slowhand. Clapton wrote the song about Pattie Boyd.[1] The female vocal harmonies on the song are provided by Marcella Detroit (then Marcy Levy) and Yvonne Elliman.',
]
query_embeddings = model.encode_query(queries)
document_embeddings = model.encode_document(documents)
print(query_embeddings.shape, document_embeddings.shape)
# [1, 4096] [3, 4096]

# Get the similarity scores for the embeddings
similarities = model.similarity(query_embeddings, document_embeddings)
print(similarities)
# tensor([[0.8907, 0.0410, 0.0237]])
```
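
The embeddings are mostly zeros. A quick sanity check of the sparsity, following on from the snippet above (a plain-PyTorch sketch; `query_embeddings` is assumed to be the tensor returned by `encode_query`):

```python
# Densify first in case the embeddings come back as torch sparse tensors.
dense = query_embeddings.to_dense() if query_embeddings.is_sparse else query_embeddings
# Count the non-zero (active) dimensions per embedding and derive the sparsity
# ratio, mirroring the query_active_dims / query_sparsity_ratio metrics below.
active_dims = (dense != 0).sum(dim=1)
sparsity_ratio = 1.0 - active_dims.float() / dense.shape[-1]
print(active_dims, sparsity_ratio)
# With at most 256 of 4096 dimensions active: tensor([256]) tensor([0.9375])
```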

<!--
### Direct Usage (Transformers)

<details><summary>Click to see the direct usage in Transformers</summary>

</details>
-->

<!--
### Downstream Usage (Sentence Transformers)

You can finetune this model on your own dataset.

<details><summary>Click to expand</summary>

</details>
-->

<!--
### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->

## Evaluation

### Metrics

#### Sparse Information Retrieval

* Dataset: `nq_eval_4`
* Evaluated with [<code>SparseInformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseInformationRetrievalEvaluator) with these parameters:
  ```json
  {
      "max_active_dims": 4
  }
  ```

| Metric                | Value      |
|:----------------------|:-----------|
| cosine_accuracy@1     | 0.341      |
| cosine_accuracy@3     | 0.53       |
| cosine_accuracy@5     | 0.616      |
| cosine_accuracy@10    | 0.71       |
| cosine_precision@1    | 0.341      |
| cosine_precision@3    | 0.1767     |
| cosine_precision@5    | 0.1232     |
| cosine_precision@10   | 0.071      |
| cosine_recall@1       | 0.341      |
| cosine_recall@3       | 0.53       |
| cosine_recall@5       | 0.616      |
| cosine_recall@10      | 0.71       |
| **cosine_ndcg@10**    | **0.5178** |
| cosine_mrr@10         | 0.457      |
| cosine_map@100        | 0.4681     |
| query_active_dims     | 4.0        |
| query_sparsity_ratio  | 0.999      |
| corpus_active_dims    | 4.0        |
| corpus_sparsity_ratio | 0.999      |

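The `nq_eval_4` numbers above come from an evaluator that can be reconstructed roughly as follows (a minimal sketch assuming the standard `InformationRetrievalEvaluator`-style arguments; the `queries`, `corpus`, and `relevant_docs` mappings are placeholders):

```python
from sentence_transformers.sparse_encoder.evaluation import SparseInformationRetrievalEvaluator

# Placeholder data: ids mapped to texts, and query ids to relevant document ids.
queries = {"q1": "who is cornelius in the book of acts"}
corpus = {"d1": "Cornelius the Centurion Cornelius was a Roman centurion ..."}
relevant_docs = {"q1": {"d1"}}

evaluator = SparseInformationRetrievalEvaluator(
    queries=queries,
    corpus=corpus,
    relevant_docs=relevant_docs,
    max_active_dims=4,  # keep only the 4 strongest dimensions per embedding
    name="nq_eval_4",
)
results = evaluator(model)  # a dict containing the metrics tabulated above
```
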
#### Sparse Information Retrieval

* Dataset: `nq_eval_8`
* Evaluated with [<code>SparseInformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseInformationRetrievalEvaluator) with these parameters:
  ```json
  {
      "max_active_dims": 8
  }
  ```

| Metric                | Value      |
|:----------------------|:-----------|
| cosine_accuracy@1     | 0.479      |
| cosine_accuracy@3     | 0.683      |
| cosine_accuracy@5     | 0.743      |
| cosine_accuracy@10    | 0.827      |
| cosine_precision@1    | 0.479      |
| cosine_precision@3    | 0.2277     |
| cosine_precision@5    | 0.1486     |
| cosine_precision@10   | 0.0827     |
| cosine_recall@1       | 0.479      |
| cosine_recall@3       | 0.683      |
| cosine_recall@5       | 0.743      |
| cosine_recall@10      | 0.827      |
| **cosine_ndcg@10**    | **0.6515** |
| cosine_mrr@10         | 0.5954     |
| cosine_map@100        | 0.6025     |
| query_active_dims     | 8.0        |
| query_sparsity_ratio  | 0.998      |
| corpus_active_dims    | 8.0        |
| corpus_sparsity_ratio | 0.998      |

#### Sparse Information Retrieval

* Dataset: `nq_eval_16`
* Evaluated with [<code>SparseInformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseInformationRetrievalEvaluator) with these parameters:
  ```json
  {
      "max_active_dims": 16
  }
  ```

| Metric                | Value      |
|:----------------------|:-----------|
| cosine_accuracy@1     | 0.61       |
| cosine_accuracy@3     | 0.792      |
| cosine_accuracy@5     | 0.843      |
| cosine_accuracy@10    | 0.9        |
| cosine_precision@1    | 0.61       |
| cosine_precision@3    | 0.264      |
| cosine_precision@5    | 0.1686     |
| cosine_precision@10   | 0.09       |
| cosine_recall@1       | 0.61       |
| cosine_recall@3       | 0.792      |
| cosine_recall@5       | 0.843      |
| cosine_recall@10      | 0.9        |
| **cosine_ndcg@10**    | **0.7573** |
| cosine_mrr@10         | 0.7115     |
| cosine_map@100        | 0.716      |
| query_active_dims     | 16.0       |
| query_sparsity_ratio  | 0.9961     |
| corpus_active_dims    | 16.0       |
| corpus_sparsity_ratio | 0.9961     |

#### Sparse Information Retrieval

* Dataset: `nq_eval_32`
* Evaluated with [<code>SparseInformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseInformationRetrievalEvaluator) with these parameters:
  ```json
  {
      "max_active_dims": 32
  }
  ```

| Metric                | Value      |
|:----------------------|:-----------|
| cosine_accuracy@1     | 0.739      |
| cosine_accuracy@3     | 0.871      |
| cosine_accuracy@5     | 0.899      |
| cosine_accuracy@10    | 0.936      |
| cosine_precision@1    | 0.739      |
| cosine_precision@3    | 0.2903     |
| cosine_precision@5    | 0.1798     |
| cosine_precision@10   | 0.0936     |
| cosine_recall@1       | 0.739      |
| cosine_recall@3       | 0.871      |
| cosine_recall@5       | 0.899      |
| cosine_recall@10      | 0.936      |
| **cosine_ndcg@10**    | **0.8407** |
| cosine_mrr@10         | 0.8098     |
| cosine_map@100        | 0.8124     |
| query_active_dims     | 32.0       |
| query_sparsity_ratio  | 0.9922     |
| corpus_active_dims    | 32.0       |
| corpus_sparsity_ratio | 0.9922     |

#### Sparse Information Retrieval

* Dataset: `nq_eval_64`
* Evaluated with [<code>SparseInformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseInformationRetrievalEvaluator) with these parameters:
  ```json
  {
      "max_active_dims": 64
  }
  ```

| Metric                | Value      |
|:----------------------|:-----------|
| cosine_accuracy@1     | 0.775      |
| cosine_accuracy@3     | 0.895      |
| cosine_accuracy@5     | 0.925      |
| cosine_accuracy@10    | 0.951      |
| cosine_precision@1    | 0.775      |
| cosine_precision@3    | 0.2983     |
| cosine_precision@5    | 0.185      |
| cosine_precision@10   | 0.0951     |
| cosine_recall@1       | 0.775      |
| cosine_recall@3       | 0.895      |
| cosine_recall@5       | 0.925      |
| cosine_recall@10      | 0.951      |
| **cosine_ndcg@10**    | **0.8673** |
| cosine_mrr@10         | 0.8399     |
| cosine_map@100        | 0.8418     |
| query_active_dims     | 63.992     |
| query_sparsity_ratio  | 0.9844     |
| corpus_active_dims    | 64.0       |
| corpus_sparsity_ratio | 0.9844     |

#### Sparse Information Retrieval

* Dataset: `nq_eval_128`
* Evaluated with [<code>SparseInformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseInformationRetrievalEvaluator) with these parameters:
  ```json
  {
      "max_active_dims": 128
  }
  ```

| Metric                | Value      |
|:----------------------|:-----------|
| cosine_accuracy@1     | 0.797      |
| cosine_accuracy@3     | 0.901      |
| cosine_accuracy@5     | 0.933      |
| cosine_accuracy@10    | 0.951      |
| cosine_precision@1    | 0.797      |
| cosine_precision@3    | 0.3003     |
| cosine_precision@5    | 0.1866     |
| cosine_precision@10   | 0.0951     |
| cosine_recall@1       | 0.797      |
| cosine_recall@3       | 0.901      |
| cosine_recall@5       | 0.933      |
| cosine_recall@10      | 0.951      |
| **cosine_ndcg@10**    | **0.8781** |
| cosine_mrr@10         | 0.8542     |
| cosine_map@100        | 0.8561     |
| query_active_dims     | 119.217    |
| query_sparsity_ratio  | 0.9709     |
| corpus_active_dims    | 119.652    |
| corpus_sparsity_ratio | 0.9708     |

#### Sparse Information Retrieval

* Dataset: `nq_eval_256`
* Evaluated with [<code>SparseInformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseInformationRetrievalEvaluator) with these parameters:
  ```json
  {
      "max_active_dims": 256
  }
  ```

| Metric                | Value      |
|:----------------------|:-----------|
| cosine_accuracy@1     | 0.8        |
| cosine_accuracy@3     | 0.901      |
| cosine_accuracy@5     | 0.933      |
| cosine_accuracy@10    | 0.951      |
| cosine_precision@1    | 0.8        |
| cosine_precision@3    | 0.3003     |
| cosine_precision@5    | 0.1866     |
| cosine_precision@10   | 0.0951     |
| cosine_recall@1       | 0.8        |
| cosine_recall@3       | 0.901      |
| cosine_recall@5       | 0.933      |
| cosine_recall@10      | 0.951      |
| **cosine_ndcg@10**    | **0.8789** |
| cosine_mrr@10         | 0.8553     |
| cosine_map@100        | 0.8573     |
| query_active_dims     | 133.43     |
| query_sparsity_ratio  | 0.9674     |
| corpus_active_dims    | 129.169    |
| corpus_sparsity_ratio | 0.9685     |

<!--
## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->

<!--
### Recommendations

*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->

## Training Details

### Training Dataset

#### natural-questions

* Dataset: [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions) at [f9e894e](https://huggingface.co/datasets/sentence-transformers/natural-questions/tree/f9e894e1081e206e577b4eaa9ee6de2b06ae6f17)
* Size: 99,000 training samples
* Columns: <code>query</code> and <code>answer</code>
* Approximate statistics based on the first 1000 samples:
  |         | query | answer |
  |:--------|:------|:-------|
  | type    | string | string |
  | details | <ul><li>min: 10 tokens</li><li>mean: 11.71 tokens</li><li>max: 26 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 131.81 tokens</li><li>max: 450 tokens</li></ul> |
* Samples:
  | query | answer |
  |:------|:-------|
  | <code>who played the father in papa don't preach</code> | <code>Alex McArthur Alex McArthur (born March 6, 1957) is an American actor.</code> |
  | <code>where was the location of the battle of hastings</code> | <code>Battle of Hastings The Battle of Hastings[a] was fought on 14 October 1066 between the Norman-French army of William, the Duke of Normandy, and an English army under the Anglo-Saxon King Harold Godwinson, beginning the Norman conquest of England. It took place approximately 7 miles (11 kilometres) northwest of Hastings, close to the present-day town of Battle, East Sussex, and was a decisive Norman victory.</code> |
  | <code>how many puppies can a dog give birth to</code> | <code>Canine reproduction The largest litter size to date was set by a Neapolitan Mastiff in Manea, Cambridgeshire, UK on November 29, 2004; the litter was 24 puppies.[22]</code> |
* Loss: [<code>CSRLoss</code>](https://sbert.net/docs/package_reference/sparse_encoder/losses.html#csrloss) with these parameters:
  ```json
  {
      "beta": 0.1,
      "gamma": 1.0,
      "loss": "SparseMultipleNegativesRankingLoss(scale=5.0, similarity_fct='cos_sim')"
  }
  ```
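
For reference, a minimal sketch of how this loss can be instantiated (assuming the constructor mirrors the parameter dump above and the linked `CSRLoss` documentation):

```python
from sentence_transformers import util
from sentence_transformers.sparse_encoder import SparseEncoder
from sentence_transformers.sparse_encoder.losses import (
    CSRLoss,
    SparseMultipleNegativesRankingLoss,
)

model = SparseEncoder("mixedbread-ai/mxbai-embed-large-v1")
# The ranking loss (scale and similarity function as dumped above) is wrapped
# by CSRLoss, which adds the sparse-autoencoder reconstruction objectives.
ranking_loss = SparseMultipleNegativesRankingLoss(model, scale=5.0, similarity_fct=util.cos_sim)
loss = CSRLoss(model, loss=ranking_loss, beta=0.1, gamma=1.0)
```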

### Evaluation Dataset

#### natural-questions

* Dataset: [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions) at [f9e894e](https://huggingface.co/datasets/sentence-transformers/natural-questions/tree/f9e894e1081e206e577b4eaa9ee6de2b06ae6f17)
* Size: 1,000 evaluation samples
* Columns: <code>query</code> and <code>answer</code>
* Approximate statistics based on the first 1000 samples:
  |         | query | answer |
  |:--------|:------|:-------|
  | type    | string | string |
  | details | <ul><li>min: 10 tokens</li><li>mean: 11.69 tokens</li><li>max: 23 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 134.01 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
  | query | answer |
  |:------|:-------|
  | <code>where is the tiber river located in italy</code> | <code>Tiber The Tiber (/ˈtaɪbər/, Latin: Tiberis,[1] Italian: Tevere [ˈteːvere])[2] is the third-longest river in Italy, rising in the Apennine Mountains in Emilia-Romagna and flowing 406 kilometres (252 mi) through Tuscany, Umbria and Lazio, where it is joined by the river Aniene, to the Tyrrhenian Sea, between Ostia and Fiumicino.[3] It drains a basin estimated at 17,375 square kilometres (6,709 sq mi). The river has achieved lasting fame as the main watercourse of the city of Rome, founded on its eastern banks.</code> |
  | <code>what kind of car does jay gatsby drive</code> | <code>Jay Gatsby At the Buchanan home, Jordan Baker, Nick, Jay, and the Buchanans decide to visit New York City. Tom borrows Gatsby's yellow Rolls Royce to drive up to the city. On the way to New York City, Tom makes a detour at a gas station in "the Valley of Ashes", a run-down part of Long Island. The owner, George Wilson, shares his concern that his wife, Myrtle, may be having an affair. This unnerves Tom, who has been having an affair with Myrtle, and he leaves in a hurry.</code> |
  | <code>who sings if i can dream about you</code> | <code>I Can Dream About You "I Can Dream About You" is a song performed by American singer Dan Hartman on the soundtrack album of the film Streets of Fire. Released in 1984 as a single from the soundtrack, and included on Hartman's album I Can Dream About You, it reached number 6 on the Billboard Hot 100.[1]</code> |
* Loss: [<code>CSRLoss</code>](https://sbert.net/docs/package_reference/sparse_encoder/losses.html#csrloss) with these parameters:
  ```json
  {
      "beta": 0.1,
      "gamma": 1.0,
      "loss": "SparseMultipleNegativesRankingLoss(scale=5.0, similarity_fct='cos_sim')"
  }
  ```

### Training Hyperparameters
#### Non-Default Hyperparameters

- `eval_strategy`: steps
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `learning_rate`: 4e-05
- `num_train_epochs`: 1
- `bf16`: True
- `batch_sampler`: no_duplicates

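These non-default values map onto the trainer API roughly as follows (a sketch; the `SparseEncoderTrainer` / `SparseEncoderTrainingArguments` names and the output directory are assumptions, and `model`, `loss`, and the datasets come from the sections above):

```python
from sentence_transformers.sparse_encoder import SparseEncoderTrainer, SparseEncoderTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SparseEncoderTrainingArguments(
    output_dir="models/csr-mxbai-embed-large-v1-nq",  # hypothetical path
    eval_strategy="steps",
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    learning_rate=4e-5,
    num_train_epochs=1,
    bf16=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # avoid duplicate in-batch negatives
)
trainer = SparseEncoderTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=loss,
)
trainer.train()
```
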
#### All Hyperparameters
<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 4e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}

</details>

### Training Logs
| Epoch  | Step | Training Loss | Validation Loss | nq_eval_4_cosine_ndcg@10 | nq_eval_8_cosine_ndcg@10 | nq_eval_16_cosine_ndcg@10 | nq_eval_32_cosine_ndcg@10 | nq_eval_64_cosine_ndcg@10 | nq_eval_128_cosine_ndcg@10 | nq_eval_256_cosine_ndcg@10 |
|:------:|:----:|:-------------:|:---------------:|:------------------------:|:------------------------:|:-------------------------:|:-------------------------:|:-------------------------:|:--------------------------:|:--------------------------:|
| -1     | -1   | -             | -               | 0.2566                   | 0.4513                   | 0.6853                    | 0.8617                    | 0.9369                    | 0.9685                     | 0.9757                     |
| 0.0646 | 100  | 2.9836        | -               | -                        | -                        | -                         | -                         | -                         | -                          | -                          |
| 0.1293 | 200  | 2.7758        | -               | -                        | -                        | -                         | -                         | -                         | -                          | -                          |
| 0.1939 | 300  | 2.6386        | 2.3891          | 0.4003                   | 0.5884                   | 0.7387                    | 0.8220                    | 0.8695                    | 0.9164                     | 0.9372                     |
| 0.2586 | 400  | 2.5466        | -               | -                        | -                        | -                         | -                         | -                         | -                          | -                          |
| 0.3232 | 500  | 2.4711        | -               | -                        | -                        | -                         | -                         | -                         | -                          | -                          |
| 0.3878 | 600  | 2.3918        | 2.1817          | 0.4580                   | 0.6189                   | 0.7230                    | 0.7986                    | 0.8554                    | 0.8939                     | 0.9146                     |
| 0.4525 | 700  | 2.2802        | -               | -                        | -                        | -                         | -                         | -                         | -                          | -                          |
| 0.5171 | 800  | 2.1309        | -               | -                        | -                        | -                         | -                         | -                         | -                          | -                          |
| 0.5818 | 900  | 2.0585        | 1.8844          | 0.4932                   | 0.6402                   | 0.7482                    | 0.8361                    | 0.8665                    | 0.8857                     | 0.8895                     |
| 0.6464 | 1000 | 2.0203        | -               | -                        | -                        | -                         | -                         | -                         | -                          | -                          |
| 0.7111 | 1100 | 1.9934        | -               | -                        | -                        | -                         | -                         | -                         | -                          | -                          |
| 0.7757 | 1200 | 1.9734        | 1.8208          | 0.5168                   | 0.6452                   | 0.7592                    | 0.8371                    | 0.8690                    | 0.8775                     | 0.8804                     |
| 0.8403 | 1300 | 1.9583        | -               | -                        | -                        | -                         | -                         | -                         | -                          | -                          |
| 0.9050 | 1400 | 1.9496        | -               | -                        | -                        | -                         | -                         | -                         | -                          | -                          |
| 0.9696 | 1500 | 1.9499        | 1.8020          | 0.5159                   | 0.6536                   | 0.7568                    | 0.8399                    | 0.8670                    | 0.8785                     | 0.8778                     |
| -1     | -1   | -             | -               | 0.5178                   | 0.6515                   | 0.7573                    | 0.8407                    | 0.8673                    | 0.8781                     | 0.8789                     |


### Environmental Impact
Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon).
- **Energy Consumed**: 0.110 kWh
- **Carbon Emitted**: 0.043 kg of CO2
- **Hours Used**: 0.274 hours

### Training Hardware
- **On Cloud**: No
- **GPU Model**: 1 x NVIDIA GeForce RTX 3090
- **CPU Model**: 13th Gen Intel(R) Core(TM) i7-13700K
- **RAM Size**: 31.78 GB

### Framework Versions
- Python: 3.11.6
- Sentence Transformers: 4.2.0.dev0
- Transformers: 4.52.4
- PyTorch: 2.6.0+cu124
- Accelerate: 1.5.1
- Datasets: 2.21.0
- Tokenizers: 0.21.1

## Citation

### BibTeX

#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

#### CSRLoss
```bibtex
@misc{wen2025matryoshkarevisitingsparsecoding,
    title={Beyond Matryoshka: Revisiting Sparse Coding for Adaptive Representation},
    author={Tiansheng Wen and Yifei Wang and Zequn Zeng and Zhong Peng and Yudi Su and Xinyang Liu and Bo Chen and Hongwei Liu and Stefanie Jegelka and Chenyu You},
    year={2025},
    eprint={2503.01776},
    archivePrefix={arXiv},
    primaryClass={cs.LG},
    url={https://arxiv.org/abs/2503.01776},
}
```

#### SparseMultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```

<!--
## Glossary

*Clearly define terms in order to be accessible across audiences.*
-->

<!--
## Model Card Authors

*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->

<!--
## Model Card Contact

*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
config.json ADDED
@@ -0,0 +1,25 @@
{
  "architectures": [
    "BertModel"
  ],
  "attention_probs_dropout_prob": 0.1,
  "classifier_dropout": null,
  "gradient_checkpointing": false,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 1024,
  "initializer_range": 0.02,
  "intermediate_size": 4096,
  "layer_norm_eps": 1e-12,
  "max_position_embeddings": 512,
  "model_type": "bert",
  "num_attention_heads": 16,
  "num_hidden_layers": 24,
  "pad_token_id": 0,
  "position_embedding_type": "absolute",
  "torch_dtype": "float32",
  "transformers_version": "4.52.4",
  "type_vocab_size": 2,
  "use_cache": false,
  "vocab_size": 30522
}
config_sentence_transformers.json ADDED
@@ -0,0 +1,15 @@
{
  "__version__": {
    "sentence_transformers": "4.2.0.dev0",
    "transformers": "4.52.4",
    "pytorch": "2.6.0+cu124"
  },
  "prompts": {
    "query": "Represent this sentence for searching relevant passages: ",
    "document": "",
    "passage": ""
  },
  "default_prompt_name": null,
  "model_type": "SparseEncoder",
  "similarity_fn_name": "cosine"
}
model.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e86b2a89f7f8933cf7bd90586cdf69d0012140e412818234b234f807e51ee574
size 1340612432
modules.json ADDED
@@ -0,0 +1,20 @@
[
  {
    "idx": 0,
    "name": "0",
    "path": "",
    "type": "sentence_transformers.models.Transformer"
  },
  {
    "idx": 1,
    "name": "1",
    "path": "1_Pooling",
    "type": "sentence_transformers.models.Pooling"
  },
  {
    "idx": 2,
    "name": "2",
    "path": "2_CSRSparsity",
    "type": "sentence_transformers.sparse_encoder.models.CSRSparsity"
  }
]
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
{
  "max_seq_length": 512,
  "do_lower_case": false
}
special_tokens_map.json ADDED
@@ -0,0 +1,37 @@
{
  "cls_token": {
    "content": "[CLS]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "mask_token": {
    "content": "[MASK]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "[PAD]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "sep_token": {
    "content": "[SEP]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "unk_token": {
    "content": "[UNK]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,58 @@
{
  "added_tokens_decoder": {
    "0": {
      "content": "[PAD]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "100": {
      "content": "[UNK]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "101": {
      "content": "[CLS]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "102": {
      "content": "[SEP]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "103": {
      "content": "[MASK]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "clean_up_tokenization_spaces": true,
  "cls_token": "[CLS]",
  "do_basic_tokenize": true,
  "do_lower_case": true,
  "extra_special_tokens": {},
  "mask_token": "[MASK]",
  "model_max_length": 512,
  "never_split": null,
  "pad_token": "[PAD]",
  "sep_token": "[SEP]",
  "strip_accents": null,
  "tokenize_chinese_chars": true,
  "tokenizer_class": "BertTokenizer",
  "unk_token": "[UNK]"
}
vocab.txt ADDED
The diff for this file is too large to render. See raw diff