tomaarsen (HF Staff) committed
Commit a00fb54 · verified · 1 Parent(s): f5a84e8

Add new SparseEncoder model
1_Pooling/config.json ADDED
@@ -0,0 +1,10 @@
+ {
+     "word_embedding_dimension": 1024,
+     "pooling_mode_cls_token": true,
+     "pooling_mode_mean_tokens": false,
+     "pooling_mode_max_tokens": false,
+     "pooling_mode_mean_sqrt_len_tokens": false,
+     "pooling_mode_weightedmean_tokens": false,
+     "pooling_mode_lasttoken": false,
+     "include_prompt": true
+ }
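This pooling config enables only CLS-token pooling: of the per-token hidden states produced by the transformer (1024-dimensional here, per `word_embedding_dimension`), the state of the first (`[CLS]`) token is kept as the sentence embedding and all other pooling modes are disabled. A minimal sketch of that operation, using toy 3-dimensional states instead of the model's real 1024-dimensional ones:

```python
import numpy as np

# Toy per-token hidden states for one sentence: shape (num_tokens, dim).
# The real model produces 1024-dimensional states; 3 dims are used here
# purely for illustration.
token_embeddings = np.array([
    [0.1, 0.2, 0.3],  # [CLS] token
    [0.4, 0.5, 0.6],  # first word piece
    [0.7, 0.8, 0.9],  # second word piece
])

# CLS pooling ("pooling_mode_cls_token": true): the sentence embedding is
# simply the hidden state of the first ([CLS]) token.
sentence_embedding = token_embeddings[0]
print(sentence_embedding)  # -> [0.1 0.2 0.3]
```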
2_CSRSparsity/config.json ADDED
@@ -0,0 +1,8 @@
+ {
+     "input_dim": 1024,
+     "hidden_dim": 4096,
+     "k": 256,
+     "k_aux": 512,
+     "normalize": false,
+     "dead_threshold": 30
+ }
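At inference time, the CSR module configured above up-projects the 1024-dimensional pooled embedding to `hidden_dim` = 4096 latents and keeps only the `k` = 256 largest activations, zeroing the rest. The following is a rough sketch of that effect with random stand-in weights (not the trained parameters); `normalize`, `k_aux`, and `dead_threshold` relate to training (e.g. the auxiliary loss) and are not modeled here:

```python
import numpy as np

rng = np.random.default_rng(0)

# Dimensions taken from the config above; weights are random stand-ins.
input_dim, hidden_dim, k = 1024, 4096, 256

dense = rng.normal(size=input_dim)                   # pooled 1024-dim embedding
W = rng.normal(size=(hidden_dim, input_dim)) * 0.02  # up-projection (stand-in)

# Up-project, apply ReLU, then keep only the k largest activations:
# every other latent is zeroed, giving a 4096-dim vector with <= k nonzeros.
hidden = np.maximum(W @ dense, 0.0)
topk = np.argsort(hidden)[-k:]
sparse = np.zeros_like(hidden)
sparse[topk] = hidden[topk]

print(np.count_nonzero(sparse) <= k)  # -> True
# Sparsity ratio: 1 - 256/4096 = 0.9375 when exactly k latents fire.
print(1.0 - np.count_nonzero(sparse) / hidden_dim)
```

With `k` = 256 active out of 4096 dimensions, the sparsity ratio is 1 − 256/4096 = 0.9375, which matches the `corpus_sparsity_ratio` values reported in the model card's metrics.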
2_CSRSparsity/model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d6aeb9fd15bd37b1911451ad0a13705f8060c9d8ec27152590a8aa98d5d8fd34
+ size 16830864
README.md ADDED
@@ -0,0 +1,2099 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ ---
2
+ language:
3
+ - en
4
+ license: apache-2.0
5
+ tags:
6
+ - sentence-transformers
7
+ - sparse-encoder
8
+ - sparse
9
+ - csr
10
+ - generated_from_trainer
11
+ - dataset_size:99000
12
+ - loss:CSRLoss
13
+ - loss:SparseMultipleNegativesRankingLoss
14
+ base_model: mixedbread-ai/mxbai-embed-large-v1
15
+ widget:
16
+ - text: Saudi Arabia–United Arab Emirates relations However, the UAE and Saudi Arabia
17
+ continue to take somewhat differing stances on regional conflicts such the Yemeni
18
+ Civil War, where the UAE opposes Al-Islah, and supports the Southern Movement,
19
+ which has fought against Saudi-backed forces, and the Syrian Civil War, where
20
+ the UAE has disagreed with Saudi support for Islamist movements.[4]
21
+ - text: Economy of New Zealand New Zealand's diverse market economy has a sizable
22
+ service sector, accounting for 63% of all GDP activity in 2013.[17] Large scale
23
+ manufacturing industries include aluminium production, food processing, metal
24
+ fabrication, wood and paper products. Mining, manufacturing, electricity, gas,
25
+ water, and waste services accounted for 16.5% of GDP in 2013.[17] The primary
26
+ sector continues to dominate New Zealand's exports, despite accounting for 6.5%
27
+ of GDP in 2013.[17]
28
+ - text: who was the first president of indian science congress meeting held in kolkata
29
+ in 1914
30
+ - text: Get Over It (Eagles song) "Get Over It" is a song by the Eagles released as
31
+ a single after a fourteen-year breakup. It was also the first song written by
32
+ bandmates Don Henley and Glenn Frey when the band reunited. "Get Over It" was
33
+ played live for the first time during their Hell Freezes Over tour in 1994. It
34
+ returned the band to the U.S. Top 40 after a fourteen-year absence, peaking at
35
+ No. 31 on the Billboard Hot 100 chart. It also hit No. 4 on the Billboard Mainstream
36
+ Rock Tracks chart. The song was not played live by the Eagles after the "Hell
37
+ Freezes Over" tour in 1994. It remains the group's last Top 40 hit in the U.S.
38
+ - text: 'Cornelius the Centurion Cornelius (Greek: Κορνήλιος) was a Roman centurion
39
+ who is considered by Christians to be one of the first Gentiles to convert to
40
+ the faith, as related in Acts of the Apostles.'
41
+ datasets:
42
+ - sentence-transformers/natural-questions
43
+ pipeline_tag: feature-extraction
44
+ library_name: sentence-transformers
45
+ metrics:
46
+ - dot_accuracy@1
47
+ - dot_accuracy@3
48
+ - dot_accuracy@5
49
+ - dot_accuracy@10
50
+ - dot_precision@1
51
+ - dot_precision@3
52
+ - dot_precision@5
53
+ - dot_precision@10
54
+ - dot_recall@1
55
+ - dot_recall@3
56
+ - dot_recall@5
57
+ - dot_recall@10
58
+ - dot_ndcg@10
59
+ - dot_mrr@10
60
+ - dot_map@100
61
+ - query_active_dims
62
+ - query_sparsity_ratio
63
+ - corpus_active_dims
64
+ - corpus_sparsity_ratio
65
+ co2_eq_emissions:
66
+ emissions: 53.740159900184786
67
+ energy_consumed: 0.13825542420719417
68
+ source: codecarbon
69
+ training_type: fine-tuning
70
+ on_cloud: false
71
+ cpu_model: 13th Gen Intel(R) Core(TM) i7-13700K
72
+ ram_total_size: 31.777088165283203
73
+ hours_used: 0.409
74
+ hardware_used: 1 x NVIDIA GeForce RTX 3090
75
+ model-index:
76
+ - name: Sparse CSR model trained on Natural Questions
77
+ results:
78
+ - task:
79
+ type: sparse-information-retrieval
80
+ name: Sparse Information Retrieval
81
+ dataset:
82
+ name: NanoMSMARCO 128
83
+ type: NanoMSMARCO_128
84
+ metrics:
85
+ - type: dot_accuracy@1
86
+ value: 0.38
87
+ name: Dot Accuracy@1
88
+ - type: dot_accuracy@3
89
+ value: 0.62
90
+ name: Dot Accuracy@3
91
+ - type: dot_accuracy@5
92
+ value: 0.72
93
+ name: Dot Accuracy@5
94
+ - type: dot_accuracy@10
95
+ value: 0.84
96
+ name: Dot Accuracy@10
97
+ - type: dot_precision@1
98
+ value: 0.38
99
+ name: Dot Precision@1
100
+ - type: dot_precision@3
101
+ value: 0.20666666666666667
102
+ name: Dot Precision@3
103
+ - type: dot_precision@5
104
+ value: 0.14400000000000002
105
+ name: Dot Precision@5
106
+ - type: dot_precision@10
107
+ value: 0.08399999999999999
108
+ name: Dot Precision@10
109
+ - type: dot_recall@1
110
+ value: 0.38
111
+ name: Dot Recall@1
112
+ - type: dot_recall@3
113
+ value: 0.62
114
+ name: Dot Recall@3
115
+ - type: dot_recall@5
116
+ value: 0.72
117
+ name: Dot Recall@5
118
+ - type: dot_recall@10
119
+ value: 0.84
120
+ name: Dot Recall@10
121
+ - type: dot_ndcg@10
122
+ value: 0.603846580732656
123
+ name: Dot Ndcg@10
124
+ - type: dot_mrr@10
125
+ value: 0.529079365079365
126
+ name: Dot Mrr@10
127
+ - type: dot_map@100
128
+ value: 0.535577429489216
129
+ name: Dot Map@100
130
+ - type: query_active_dims
131
+ value: 128.0
132
+ name: Query Active Dims
133
+ - type: query_sparsity_ratio
134
+ value: 0.96875
135
+ name: Query Sparsity Ratio
136
+ - type: corpus_active_dims
137
+ value: 128.0
138
+ name: Corpus Active Dims
139
+ - type: corpus_sparsity_ratio
140
+ value: 0.96875
141
+ name: Corpus Sparsity Ratio
142
+ - task:
143
+ type: sparse-information-retrieval
144
+ name: Sparse Information Retrieval
145
+ dataset:
146
+ name: NanoNFCorpus 128
147
+ type: NanoNFCorpus_128
148
+ metrics:
149
+ - type: dot_accuracy@1
150
+ value: 0.4
151
+ name: Dot Accuracy@1
152
+ - type: dot_accuracy@3
153
+ value: 0.52
154
+ name: Dot Accuracy@3
155
+ - type: dot_accuracy@5
156
+ value: 0.62
157
+ name: Dot Accuracy@5
158
+ - type: dot_accuracy@10
159
+ value: 0.68
160
+ name: Dot Accuracy@10
161
+ - type: dot_precision@1
162
+ value: 0.4
163
+ name: Dot Precision@1
164
+ - type: dot_precision@3
165
+ value: 0.34
166
+ name: Dot Precision@3
167
+ - type: dot_precision@5
168
+ value: 0.336
169
+ name: Dot Precision@5
170
+ - type: dot_precision@10
171
+ value: 0.28600000000000003
172
+ name: Dot Precision@10
173
+ - type: dot_recall@1
174
+ value: 0.02662938222230507
175
+ name: Dot Recall@1
176
+ - type: dot_recall@3
177
+ value: 0.08583886950771044
178
+ name: Dot Recall@3
179
+ - type: dot_recall@5
180
+ value: 0.10539572959638349
181
+ name: Dot Recall@5
182
+ - type: dot_recall@10
183
+ value: 0.1390606096616216
184
+ name: Dot Recall@10
185
+ - type: dot_ndcg@10
186
+ value: 0.33155673498755867
187
+ name: Dot Ndcg@10
188
+ - type: dot_mrr@10
189
+ value: 0.4815555555555555
190
+ name: Dot Mrr@10
191
+ - type: dot_map@100
192
+ value: 0.14591039936040862
193
+ name: Dot Map@100
194
+ - type: query_active_dims
195
+ value: 128.0
196
+ name: Query Active Dims
197
+ - type: query_sparsity_ratio
198
+ value: 0.96875
199
+ name: Query Sparsity Ratio
200
+ - type: corpus_active_dims
201
+ value: 128.0
202
+ name: Corpus Active Dims
203
+ - type: corpus_sparsity_ratio
204
+ value: 0.96875
205
+ name: Corpus Sparsity Ratio
206
+ - task:
207
+ type: sparse-information-retrieval
208
+ name: Sparse Information Retrieval
209
+ dataset:
210
+ name: NanoNQ 128
211
+ type: NanoNQ_128
212
+ metrics:
213
+ - type: dot_accuracy@1
214
+ value: 0.44
215
+ name: Dot Accuracy@1
216
+ - type: dot_accuracy@3
217
+ value: 0.64
218
+ name: Dot Accuracy@3
219
+ - type: dot_accuracy@5
220
+ value: 0.78
221
+ name: Dot Accuracy@5
222
+ - type: dot_accuracy@10
223
+ value: 0.8
224
+ name: Dot Accuracy@10
225
+ - type: dot_precision@1
226
+ value: 0.44
227
+ name: Dot Precision@1
228
+ - type: dot_precision@3
229
+ value: 0.21333333333333335
230
+ name: Dot Precision@3
231
+ - type: dot_precision@5
232
+ value: 0.16
233
+ name: Dot Precision@5
234
+ - type: dot_precision@10
235
+ value: 0.08399999999999999
236
+ name: Dot Precision@10
237
+ - type: dot_recall@1
238
+ value: 0.43
239
+ name: Dot Recall@1
240
+ - type: dot_recall@3
241
+ value: 0.6
242
+ name: Dot Recall@3
243
+ - type: dot_recall@5
244
+ value: 0.73
245
+ name: Dot Recall@5
246
+ - type: dot_recall@10
247
+ value: 0.76
248
+ name: Dot Recall@10
249
+ - type: dot_ndcg@10
250
+ value: 0.6020077639360719
251
+ name: Dot Ndcg@10
252
+ - type: dot_mrr@10
253
+ value: 0.5624999999999999
254
+ name: Dot Mrr@10
255
+ - type: dot_map@100
256
+ value: 0.5519887965031844
257
+ name: Dot Map@100
258
+ - type: query_active_dims
259
+ value: 128.0
260
+ name: Query Active Dims
261
+ - type: query_sparsity_ratio
262
+ value: 0.96875
263
+ name: Query Sparsity Ratio
264
+ - type: corpus_active_dims
265
+ value: 128.0
266
+ name: Corpus Active Dims
267
+ - type: corpus_sparsity_ratio
268
+ value: 0.96875
269
+ name: Corpus Sparsity Ratio
270
+ - task:
271
+ type: sparse-nano-beir
272
+ name: Sparse Nano BEIR
273
+ dataset:
274
+ name: NanoBEIR mean 128
275
+ type: NanoBEIR_mean_128
276
+ metrics:
277
+ - type: dot_accuracy@1
278
+ value: 0.4066666666666667
279
+ name: Dot Accuracy@1
280
+ - type: dot_accuracy@3
281
+ value: 0.5933333333333334
282
+ name: Dot Accuracy@3
283
+ - type: dot_accuracy@5
284
+ value: 0.7066666666666667
285
+ name: Dot Accuracy@5
286
+ - type: dot_accuracy@10
287
+ value: 0.7733333333333334
288
+ name: Dot Accuracy@10
289
+ - type: dot_precision@1
290
+ value: 0.4066666666666667
291
+ name: Dot Precision@1
292
+ - type: dot_precision@3
293
+ value: 0.25333333333333335
294
+ name: Dot Precision@3
295
+ - type: dot_precision@5
296
+ value: 0.21333333333333335
297
+ name: Dot Precision@5
298
+ - type: dot_precision@10
299
+ value: 0.15133333333333332
300
+ name: Dot Precision@10
301
+ - type: dot_recall@1
302
+ value: 0.27887646074076833
303
+ name: Dot Recall@1
304
+ - type: dot_recall@3
305
+ value: 0.4352796231692368
306
+ name: Dot Recall@3
307
+ - type: dot_recall@5
308
+ value: 0.5184652431987945
309
+ name: Dot Recall@5
310
+ - type: dot_recall@10
311
+ value: 0.5796868698872072
312
+ name: Dot Recall@10
313
+ - type: dot_ndcg@10
314
+ value: 0.5124703598854289
315
+ name: Dot Ndcg@10
316
+ - type: dot_mrr@10
317
+ value: 0.5243783068783068
318
+ name: Dot Mrr@10
319
+ - type: dot_map@100
320
+ value: 0.411158875117603
321
+ name: Dot Map@100
322
+ - type: query_active_dims
323
+ value: 128.0
324
+ name: Query Active Dims
325
+ - type: query_sparsity_ratio
326
+ value: 0.96875
327
+ name: Query Sparsity Ratio
328
+ - type: corpus_active_dims
329
+ value: 128.0
330
+ name: Corpus Active Dims
331
+ - type: corpus_sparsity_ratio
332
+ value: 0.96875
333
+ name: Corpus Sparsity Ratio
334
+ - task:
335
+ type: sparse-information-retrieval
336
+ name: Sparse Information Retrieval
337
+ dataset:
338
+ name: NanoMSMARCO 256
339
+ type: NanoMSMARCO_256
340
+ metrics:
341
+ - type: dot_accuracy@1
342
+ value: 0.44
343
+ name: Dot Accuracy@1
344
+ - type: dot_accuracy@3
345
+ value: 0.66
346
+ name: Dot Accuracy@3
347
+ - type: dot_accuracy@5
348
+ value: 0.78
349
+ name: Dot Accuracy@5
350
+ - type: dot_accuracy@10
351
+ value: 0.84
352
+ name: Dot Accuracy@10
353
+ - type: dot_precision@1
354
+ value: 0.44
355
+ name: Dot Precision@1
356
+ - type: dot_precision@3
357
+ value: 0.22
358
+ name: Dot Precision@3
359
+ - type: dot_precision@5
360
+ value: 0.156
361
+ name: Dot Precision@5
362
+ - type: dot_precision@10
363
+ value: 0.08399999999999999
364
+ name: Dot Precision@10
365
+ - type: dot_recall@1
366
+ value: 0.44
367
+ name: Dot Recall@1
368
+ - type: dot_recall@3
369
+ value: 0.66
370
+ name: Dot Recall@3
371
+ - type: dot_recall@5
372
+ value: 0.78
373
+ name: Dot Recall@5
374
+ - type: dot_recall@10
375
+ value: 0.84
376
+ name: Dot Recall@10
377
+ - type: dot_ndcg@10
378
+ value: 0.6402220356297674
379
+ name: Dot Ndcg@10
380
+ - type: dot_mrr@10
381
+ value: 0.576079365079365
382
+ name: Dot Mrr@10
383
+ - type: dot_map@100
384
+ value: 0.5819739218018417
385
+ name: Dot Map@100
386
+ - type: query_active_dims
387
+ value: 256.0
388
+ name: Query Active Dims
389
+ - type: query_sparsity_ratio
390
+ value: 0.9375
391
+ name: Query Sparsity Ratio
392
+ - type: corpus_active_dims
393
+ value: 256.0
394
+ name: Corpus Active Dims
395
+ - type: corpus_sparsity_ratio
396
+ value: 0.9375
397
+ name: Corpus Sparsity Ratio
398
+ - task:
399
+ type: sparse-information-retrieval
400
+ name: Sparse Information Retrieval
401
+ dataset:
402
+ name: NanoNFCorpus 256
403
+ type: NanoNFCorpus_256
404
+ metrics:
405
+ - type: dot_accuracy@1
406
+ value: 0.42
407
+ name: Dot Accuracy@1
408
+ - type: dot_accuracy@3
409
+ value: 0.54
410
+ name: Dot Accuracy@3
411
+ - type: dot_accuracy@5
412
+ value: 0.58
413
+ name: Dot Accuracy@5
414
+ - type: dot_accuracy@10
415
+ value: 0.7
416
+ name: Dot Accuracy@10
417
+ - type: dot_precision@1
418
+ value: 0.42
419
+ name: Dot Precision@1
420
+ - type: dot_precision@3
421
+ value: 0.35999999999999993
422
+ name: Dot Precision@3
423
+ - type: dot_precision@5
424
+ value: 0.344
425
+ name: Dot Precision@5
426
+ - type: dot_precision@10
427
+ value: 0.29200000000000004
428
+ name: Dot Precision@10
429
+ - type: dot_recall@1
430
+ value: 0.018848269093365854
431
+ name: Dot Recall@1
432
+ - type: dot_recall@3
433
+ value: 0.07354907247001424
434
+ name: Dot Recall@3
435
+ - type: dot_recall@5
436
+ value: 0.09781289475269293
437
+ name: Dot Recall@5
438
+ - type: dot_recall@10
439
+ value: 0.1418672876485781
440
+ name: Dot Recall@10
441
+ - type: dot_ndcg@10
442
+ value: 0.33652365839683074
443
+ name: Dot Ndcg@10
444
+ - type: dot_mrr@10
445
+ value: 0.4957698412698413
446
+ name: Dot Mrr@10
447
+ - type: dot_map@100
448
+ value: 0.14165509490208594
449
+ name: Dot Map@100
450
+ - type: query_active_dims
451
+ value: 256.0
452
+ name: Query Active Dims
453
+ - type: query_sparsity_ratio
454
+ value: 0.9375
455
+ name: Query Sparsity Ratio
456
+ - type: corpus_active_dims
457
+ value: 256.0
458
+ name: Corpus Active Dims
459
+ - type: corpus_sparsity_ratio
460
+ value: 0.9375
461
+ name: Corpus Sparsity Ratio
462
+ - task:
463
+ type: sparse-information-retrieval
464
+ name: Sparse Information Retrieval
465
+ dataset:
466
+ name: NanoNQ 256
467
+ type: NanoNQ_256
468
+ metrics:
469
+ - type: dot_accuracy@1
470
+ value: 0.56
471
+ name: Dot Accuracy@1
472
+ - type: dot_accuracy@3
473
+ value: 0.7
474
+ name: Dot Accuracy@3
475
+ - type: dot_accuracy@5
476
+ value: 0.78
477
+ name: Dot Accuracy@5
478
+ - type: dot_accuracy@10
479
+ value: 0.86
480
+ name: Dot Accuracy@10
481
+ - type: dot_precision@1
482
+ value: 0.56
483
+ name: Dot Precision@1
484
+ - type: dot_precision@3
485
+ value: 0.23333333333333336
486
+ name: Dot Precision@3
487
+ - type: dot_precision@5
488
+ value: 0.16
489
+ name: Dot Precision@5
490
+ - type: dot_precision@10
491
+ value: 0.09399999999999999
492
+ name: Dot Precision@10
493
+ - type: dot_recall@1
494
+ value: 0.54
495
+ name: Dot Recall@1
496
+ - type: dot_recall@3
497
+ value: 0.65
498
+ name: Dot Recall@3
499
+ - type: dot_recall@5
500
+ value: 0.73
501
+ name: Dot Recall@5
502
+ - type: dot_recall@10
503
+ value: 0.83
504
+ name: Dot Recall@10
505
+ - type: dot_ndcg@10
506
+ value: 0.6813657040884066
507
+ name: Dot Ndcg@10
508
+ - type: dot_mrr@10
509
+ value: 0.647301587301587
510
+ name: Dot Mrr@10
511
+ - type: dot_map@100
512
+ value: 0.6310147772294485
513
+ name: Dot Map@100
514
+ - type: query_active_dims
515
+ value: 256.0
516
+ name: Query Active Dims
517
+ - type: query_sparsity_ratio
518
+ value: 0.9375
519
+ name: Query Sparsity Ratio
520
+ - type: corpus_active_dims
521
+ value: 256.0
522
+ name: Corpus Active Dims
523
+ - type: corpus_sparsity_ratio
524
+ value: 0.9375
525
+ name: Corpus Sparsity Ratio
526
+ - task:
527
+ type: sparse-nano-beir
528
+ name: Sparse Nano BEIR
529
+ dataset:
530
+ name: NanoBEIR mean 256
531
+ type: NanoBEIR_mean_256
532
+ metrics:
533
+ - type: dot_accuracy@1
534
+ value: 0.47333333333333333
535
+ name: Dot Accuracy@1
536
+ - type: dot_accuracy@3
537
+ value: 0.6333333333333334
538
+ name: Dot Accuracy@3
539
+ - type: dot_accuracy@5
540
+ value: 0.7133333333333333
541
+ name: Dot Accuracy@5
542
+ - type: dot_accuracy@10
543
+ value: 0.7999999999999999
544
+ name: Dot Accuracy@10
545
+ - type: dot_precision@1
546
+ value: 0.47333333333333333
547
+ name: Dot Precision@1
548
+ - type: dot_precision@3
549
+ value: 0.27111111111111114
550
+ name: Dot Precision@3
551
+ - type: dot_precision@5
552
+ value: 0.22
553
+ name: Dot Precision@5
554
+ - type: dot_precision@10
555
+ value: 0.15666666666666665
556
+ name: Dot Precision@10
557
+ - type: dot_recall@1
558
+ value: 0.33294942303112196
559
+ name: Dot Recall@1
560
+ - type: dot_recall@3
561
+ value: 0.46118302415667145
562
+ name: Dot Recall@3
563
+ - type: dot_recall@5
564
+ value: 0.5359376315842309
565
+ name: Dot Recall@5
566
+ - type: dot_recall@10
567
+ value: 0.6039557625495261
568
+ name: Dot Recall@10
569
+ - type: dot_ndcg@10
570
+ value: 0.5527037993716682
571
+ name: Dot Ndcg@10
572
+ - type: dot_mrr@10
573
+ value: 0.5730502645502644
574
+ name: Dot Mrr@10
575
+ - type: dot_map@100
576
+ value: 0.4515479313111254
577
+ name: Dot Map@100
578
+ - type: query_active_dims
579
+ value: 256.0
580
+ name: Query Active Dims
581
+ - type: query_sparsity_ratio
582
+ value: 0.9375
583
+ name: Query Sparsity Ratio
584
+ - type: corpus_active_dims
585
+ value: 256.0
586
+ name: Corpus Active Dims
587
+ - type: corpus_sparsity_ratio
588
+ value: 0.9375
589
+ name: Corpus Sparsity Ratio
590
+ - task:
591
+ type: sparse-information-retrieval
592
+ name: Sparse Information Retrieval
593
+ dataset:
594
+ name: NanoClimateFEVER
595
+ type: NanoClimateFEVER
596
+ metrics:
597
+ - type: dot_accuracy@1
598
+ value: 0.2
599
+ name: Dot Accuracy@1
600
+ - type: dot_accuracy@3
601
+ value: 0.52
602
+ name: Dot Accuracy@3
603
+ - type: dot_accuracy@5
604
+ value: 0.56
605
+ name: Dot Accuracy@5
606
+ - type: dot_accuracy@10
607
+ value: 0.68
608
+ name: Dot Accuracy@10
609
+ - type: dot_precision@1
610
+ value: 0.2
611
+ name: Dot Precision@1
612
+ - type: dot_precision@3
613
+ value: 0.19333333333333333
614
+ name: Dot Precision@3
615
+ - type: dot_precision@5
616
+ value: 0.132
617
+ name: Dot Precision@5
618
+ - type: dot_precision@10
619
+ value: 0.088
620
+ name: Dot Precision@10
621
+ - type: dot_recall@1
622
+ value: 0.07833333333333332
623
+ name: Dot Recall@1
624
+ - type: dot_recall@3
625
+ value: 0.24499999999999997
626
+ name: Dot Recall@3
627
+ - type: dot_recall@5
628
+ value: 0.28333333333333327
629
+ name: Dot Recall@5
630
+ - type: dot_recall@10
631
+ value: 0.3473333333333333
632
+ name: Dot Recall@10
633
+ - type: dot_ndcg@10
634
+ value: 0.27333419680435084
635
+ name: Dot Ndcg@10
636
+ - type: dot_mrr@10
637
+ value: 0.3666031746031747
638
+ name: Dot Mrr@10
639
+ - type: dot_map@100
640
+ value: 0.21266834216817831
641
+ name: Dot Map@100
642
+ - type: query_active_dims
643
+ value: 256.0
644
+ name: Query Active Dims
645
+ - type: query_sparsity_ratio
646
+ value: 0.9375
647
+ name: Query Sparsity Ratio
648
+ - type: corpus_active_dims
649
+ value: 256.0
650
+ name: Corpus Active Dims
651
+ - type: corpus_sparsity_ratio
652
+ value: 0.9375
653
+ name: Corpus Sparsity Ratio
654
+ - task:
655
+ type: sparse-information-retrieval
656
+ name: Sparse Information Retrieval
657
+ dataset:
658
+ name: NanoDBPedia
659
+ type: NanoDBPedia
660
+ metrics:
661
+ - type: dot_accuracy@1
662
+ value: 0.74
663
+ name: Dot Accuracy@1
664
+ - type: dot_accuracy@3
665
+ value: 0.86
666
+ name: Dot Accuracy@3
667
+ - type: dot_accuracy@5
668
+ value: 0.92
669
+ name: Dot Accuracy@5
670
+ - type: dot_accuracy@10
671
+ value: 0.94
672
+ name: Dot Accuracy@10
673
+ - type: dot_precision@1
674
+ value: 0.74
675
+ name: Dot Precision@1
676
+ - type: dot_precision@3
677
+ value: 0.5866666666666667
678
+ name: Dot Precision@3
679
+ - type: dot_precision@5
680
+ value: 0.556
681
+ name: Dot Precision@5
682
+ - type: dot_precision@10
683
+ value: 0.484
684
+ name: Dot Precision@10
685
+ - type: dot_recall@1
686
+ value: 0.08366724054361292
687
+ name: Dot Recall@1
688
+ - type: dot_recall@3
689
+ value: 0.16227352802558825
690
+ name: Dot Recall@3
691
+ - type: dot_recall@5
692
+ value: 0.2213882427797012
693
+ name: Dot Recall@5
694
+ - type: dot_recall@10
695
+ value: 0.3353731792736538
696
+ name: Dot Recall@10
697
+ - type: dot_ndcg@10
698
+ value: 0.5972307350486245
699
+ name: Dot Ndcg@10
700
+ - type: dot_mrr@10
701
+ value: 0.8152222222222223
702
+ name: Dot Mrr@10
703
+ - type: dot_map@100
704
+ value: 0.45303559906331897
705
+ name: Dot Map@100
706
+ - type: query_active_dims
707
+ value: 256.0
708
+ name: Query Active Dims
709
+ - type: query_sparsity_ratio
710
+ value: 0.9375
711
+ name: Query Sparsity Ratio
712
+ - type: corpus_active_dims
713
+ value: 256.0
714
+ name: Corpus Active Dims
715
+ - type: corpus_sparsity_ratio
716
+ value: 0.9375
717
+ name: Corpus Sparsity Ratio
718
+ - task:
719
+ type: sparse-information-retrieval
720
+ name: Sparse Information Retrieval
721
+ dataset:
722
+ name: NanoFEVER
723
+ type: NanoFEVER
724
+ metrics:
725
+ - type: dot_accuracy@1
726
+ value: 0.86
727
+ name: Dot Accuracy@1
728
+ - type: dot_accuracy@3
729
+ value: 0.98
730
+ name: Dot Accuracy@3
731
+ - type: dot_accuracy@5
732
+ value: 0.98
733
+ name: Dot Accuracy@5
734
+ - type: dot_accuracy@10
735
+ value: 0.98
736
+ name: Dot Accuracy@10
737
+ - type: dot_precision@1
738
+ value: 0.86
739
+ name: Dot Precision@1
740
+ - type: dot_precision@3
741
+ value: 0.34666666666666657
742
+ name: Dot Precision@3
743
+ - type: dot_precision@5
744
+ value: 0.20799999999999996
745
+ name: Dot Precision@5
746
+ - type: dot_precision@10
747
+ value: 0.10399999999999998
748
+ name: Dot Precision@10
749
+ - type: dot_recall@1
750
+ value: 0.8066666666666668
751
+ name: Dot Recall@1
752
+ - type: dot_recall@3
753
+ value: 0.9433333333333332
754
+ name: Dot Recall@3
755
+ - type: dot_recall@5
756
+ value: 0.9433333333333332
757
+ name: Dot Recall@5
758
+ - type: dot_recall@10
759
+ value: 0.9433333333333332
760
+ name: Dot Recall@10
761
+ - type: dot_ndcg@10
762
+ value: 0.9054259418093692
763
+ name: Dot Ndcg@10
764
+ - type: dot_mrr@10
765
+ value: 0.9133333333333333
766
+ name: Dot Mrr@10
767
+ - type: dot_map@100
768
+ value: 0.8844551282051283
769
+ name: Dot Map@100
770
+ - type: query_active_dims
771
+ value: 256.0
772
+ name: Query Active Dims
773
+ - type: query_sparsity_ratio
774
+ value: 0.9375
775
+ name: Query Sparsity Ratio
776
+ - type: corpus_active_dims
777
+ value: 256.0
778
+ name: Corpus Active Dims
779
+ - type: corpus_sparsity_ratio
780
+ value: 0.9375
781
+ name: Corpus Sparsity Ratio
782
+ - task:
783
+ type: sparse-information-retrieval
784
+ name: Sparse Information Retrieval
785
+ dataset:
786
+ name: NanoFiQA2018
787
+ type: NanoFiQA2018
788
+ metrics:
789
+ - type: dot_accuracy@1
790
+ value: 0.5
791
+ name: Dot Accuracy@1
792
+ - type: dot_accuracy@3
793
+ value: 0.62
794
+ name: Dot Accuracy@3
795
+ - type: dot_accuracy@5
796
+ value: 0.64
797
+ name: Dot Accuracy@5
798
+ - type: dot_accuracy@10
799
+ value: 0.68
800
+ name: Dot Accuracy@10
801
+ - type: dot_precision@1
802
+ value: 0.5
803
+ name: Dot Precision@1
804
+ - type: dot_precision@3
805
+ value: 0.3133333333333333
806
+ name: Dot Precision@3
807
+ - type: dot_precision@5
808
+ value: 0.22399999999999998
809
+ name: Dot Precision@5
810
+ - type: dot_precision@10
811
+ value: 0.13799999999999998
812
+ name: Dot Precision@10
813
+ - type: dot_recall@1
814
+ value: 0.2725793650793651
815
+ name: Dot Recall@1
816
+ - type: dot_recall@3
817
+ value: 0.4129047619047619
818
+ name: Dot Recall@3
819
+ - type: dot_recall@5
820
+ value: 0.4605714285714286
821
+ name: Dot Recall@5
822
+ - type: dot_recall@10
823
+ value: 0.5500873015873016
824
+ name: Dot Recall@10
825
+ - type: dot_ndcg@10
826
+ value: 0.49585690755175454
827
+ name: Dot Ndcg@10
828
+ - type: dot_mrr@10
829
+ value: 0.5641666666666666
830
+ name: Dot Mrr@10
831
+ - type: dot_map@100
832
+ value: 0.4425504355719097
833
+ name: Dot Map@100
834
+ - type: query_active_dims
835
+ value: 256.0
836
+ name: Query Active Dims
837
+ - type: query_sparsity_ratio
838
+ value: 0.9375
839
+ name: Query Sparsity Ratio
840
+ - type: corpus_active_dims
841
+ value: 256.0
842
+ name: Corpus Active Dims
843
+ - type: corpus_sparsity_ratio
844
+ value: 0.9375
845
+ name: Corpus Sparsity Ratio
846
+ - task:
847
+ type: sparse-information-retrieval
848
+ name: Sparse Information Retrieval
849
+ dataset:
850
+ name: NanoHotpotQA
851
+ type: NanoHotpotQA
852
+ metrics:
853
+ - type: dot_accuracy@1
854
+ value: 0.84
855
+ name: Dot Accuracy@1
856
+ - type: dot_accuracy@3
857
+ value: 0.92
858
+ name: Dot Accuracy@3
859
+ - type: dot_accuracy@5
860
+ value: 0.96
861
+ name: Dot Accuracy@5
862
+ - type: dot_accuracy@10
863
+ value: 0.96
864
+ name: Dot Accuracy@10
865
+ - type: dot_precision@1
866
+ value: 0.84
867
+ name: Dot Precision@1
868
+ - type: dot_precision@3
869
+ value: 0.4733333333333333
870
+ name: Dot Precision@3
871
+ - type: dot_precision@5
872
+ value: 0.316
873
+ name: Dot Precision@5
874
+ - type: dot_precision@10
875
+ value: 0.17399999999999996
876
+ name: Dot Precision@10
877
+ - type: dot_recall@1
878
+ value: 0.42
879
+ name: Dot Recall@1
880
+ - type: dot_recall@3
881
+ value: 0.71
882
+ name: Dot Recall@3
883
+ - type: dot_recall@5
884
+ value: 0.79
885
+ name: Dot Recall@5
886
+ - type: dot_recall@10
887
+ value: 0.87
888
+ name: Dot Recall@10
889
+ - type: dot_ndcg@10
890
+ value: 0.802663278529999
891
+ name: Dot Ndcg@10
892
+ - type: dot_mrr@10
893
+ value: 0.8856666666666666
894
+ name: Dot Mrr@10
895
+ - type: dot_map@100
896
+ value: 0.7334779802028212
897
+ name: Dot Map@100
898
+ - type: query_active_dims
899
+ value: 256.0
900
+ name: Query Active Dims
901
+ - type: query_sparsity_ratio
902
+ value: 0.9375
903
+ name: Query Sparsity Ratio
904
+ - type: corpus_active_dims
905
+ value: 256.0
906
+ name: Corpus Active Dims
907
+ - type: corpus_sparsity_ratio
908
+ value: 0.9375
909
+ name: Corpus Sparsity Ratio
910
  - task:
      type: sparse-information-retrieval
      name: Sparse Information Retrieval
    dataset:
      name: NanoMSMARCO
      type: NanoMSMARCO
    metrics:
    - type: dot_accuracy@1
      value: 0.42
      name: Dot Accuracy@1
    - type: dot_accuracy@3
      value: 0.66
      name: Dot Accuracy@3
    - type: dot_accuracy@5
      value: 0.78
      name: Dot Accuracy@5
    - type: dot_accuracy@10
      value: 0.84
      name: Dot Accuracy@10
    - type: dot_precision@1
      value: 0.42
      name: Dot Precision@1
    - type: dot_precision@3
      value: 0.22
      name: Dot Precision@3
    - type: dot_precision@5
      value: 0.156
      name: Dot Precision@5
    - type: dot_precision@10
      value: 0.08399999999999999
      name: Dot Precision@10
    - type: dot_recall@1
      value: 0.42
      name: Dot Recall@1
    - type: dot_recall@3
      value: 0.66
      name: Dot Recall@3
    - type: dot_recall@5
      value: 0.78
      name: Dot Recall@5
    - type: dot_recall@10
      value: 0.84
      name: Dot Recall@10
    - type: dot_ndcg@10
      value: 0.6354592257726257
      name: Dot Ndcg@10
    - type: dot_mrr@10
      value: 0.5694126984126984
      name: Dot Mrr@10
    - type: dot_map@100
      value: 0.5752130160409359
      name: Dot Map@100
    - type: query_active_dims
      value: 256.0
      name: Query Active Dims
    - type: query_sparsity_ratio
      value: 0.9375
      name: Query Sparsity Ratio
    - type: corpus_active_dims
      value: 256.0
      name: Corpus Active Dims
    - type: corpus_sparsity_ratio
      value: 0.9375
      name: Corpus Sparsity Ratio
  - task:
      type: sparse-information-retrieval
      name: Sparse Information Retrieval
    dataset:
      name: NanoNFCorpus
      type: NanoNFCorpus
    metrics:
    - type: dot_accuracy@1
      value: 0.42
      name: Dot Accuracy@1
    - type: dot_accuracy@3
      value: 0.54
      name: Dot Accuracy@3
    - type: dot_accuracy@5
      value: 0.58
      name: Dot Accuracy@5
    - type: dot_accuracy@10
      value: 0.7
      name: Dot Accuracy@10
    - type: dot_precision@1
      value: 0.42
      name: Dot Precision@1
    - type: dot_precision@3
      value: 0.35999999999999993
      name: Dot Precision@3
    - type: dot_precision@5
      value: 0.34
      name: Dot Precision@5
    - type: dot_precision@10
      value: 0.29
      name: Dot Precision@10
    - type: dot_recall@1
      value: 0.018848269093365854
      name: Dot Recall@1
    - type: dot_recall@3
      value: 0.07354907247001424
      name: Dot Recall@3
    - type: dot_recall@5
      value: 0.0962744332142314
      name: Dot Recall@5
    - type: dot_recall@10
      value: 0.14178823626517886
      name: Dot Recall@10
    - type: dot_ndcg@10
      value: 0.3352519406973144
      name: Dot Ndcg@10
    - type: dot_mrr@10
      value: 0.49602380952380964
      name: Dot Mrr@10
    - type: dot_map@100
      value: 0.14142955254174144
      name: Dot Map@100
    - type: query_active_dims
      value: 256.0
      name: Query Active Dims
    - type: query_sparsity_ratio
      value: 0.9375
      name: Query Sparsity Ratio
    - type: corpus_active_dims
      value: 256.0
      name: Corpus Active Dims
    - type: corpus_sparsity_ratio
      value: 0.9375
      name: Corpus Sparsity Ratio
  - task:
      type: sparse-information-retrieval
      name: Sparse Information Retrieval
    dataset:
      name: NanoNQ
      type: NanoNQ
    metrics:
    - type: dot_accuracy@1
      value: 0.56
      name: Dot Accuracy@1
    - type: dot_accuracy@3
      value: 0.7
      name: Dot Accuracy@3
    - type: dot_accuracy@5
      value: 0.78
      name: Dot Accuracy@5
    - type: dot_accuracy@10
      value: 0.86
      name: Dot Accuracy@10
    - type: dot_precision@1
      value: 0.56
      name: Dot Precision@1
    - type: dot_precision@3
      value: 0.23333333333333336
      name: Dot Precision@3
    - type: dot_precision@5
      value: 0.16
      name: Dot Precision@5
    - type: dot_precision@10
      value: 0.09399999999999999
      name: Dot Precision@10
    - type: dot_recall@1
      value: 0.54
      name: Dot Recall@1
    - type: dot_recall@3
      value: 0.65
      name: Dot Recall@3
    - type: dot_recall@5
      value: 0.73
      name: Dot Recall@5
    - type: dot_recall@10
      value: 0.83
      name: Dot Recall@10
    - type: dot_ndcg@10
      value: 0.6813657040884066
      name: Dot Ndcg@10
    - type: dot_mrr@10
      value: 0.647301587301587
      name: Dot Mrr@10
    - type: dot_map@100
      value: 0.6311451301239768
      name: Dot Map@100
    - type: query_active_dims
      value: 256.0
      name: Query Active Dims
    - type: query_sparsity_ratio
      value: 0.9375
      name: Query Sparsity Ratio
    - type: corpus_active_dims
      value: 256.0
      name: Corpus Active Dims
    - type: corpus_sparsity_ratio
      value: 0.9375
      name: Corpus Sparsity Ratio
  - task:
      type: sparse-information-retrieval
      name: Sparse Information Retrieval
    dataset:
      name: NanoQuoraRetrieval
      type: NanoQuoraRetrieval
    metrics:
    - type: dot_accuracy@1
      value: 0.86
      name: Dot Accuracy@1
    - type: dot_accuracy@3
      value: 0.98
      name: Dot Accuracy@3
    - type: dot_accuracy@5
      value: 0.98
      name: Dot Accuracy@5
    - type: dot_accuracy@10
      value: 1.0
      name: Dot Accuracy@10
    - type: dot_precision@1
      value: 0.86
      name: Dot Precision@1
    - type: dot_precision@3
      value: 0.4
      name: Dot Precision@3
    - type: dot_precision@5
      value: 0.26799999999999996
      name: Dot Precision@5
    - type: dot_precision@10
      value: 0.13799999999999998
      name: Dot Precision@10
    - type: dot_recall@1
      value: 0.7373333333333332
      name: Dot Recall@1
    - type: dot_recall@3
      value: 0.9353333333333333
      name: Dot Recall@3
    - type: dot_recall@5
      value: 0.9733333333333334
      name: Dot Recall@5
    - type: dot_recall@10
      value: 0.9966666666666666
      name: Dot Recall@10
    - type: dot_ndcg@10
      value: 0.9283913808760963
      name: Dot Ndcg@10
    - type: dot_mrr@10
      value: 0.9166666666666665
      name: Dot Mrr@10
    - type: dot_map@100
      value: 0.8996944444444444
      name: Dot Map@100
    - type: query_active_dims
      value: 256.0
      name: Query Active Dims
    - type: query_sparsity_ratio
      value: 0.9375
      name: Query Sparsity Ratio
    - type: corpus_active_dims
      value: 256.0
      name: Corpus Active Dims
    - type: corpus_sparsity_ratio
      value: 0.9375
      name: Corpus Sparsity Ratio
  - task:
      type: sparse-information-retrieval
      name: Sparse Information Retrieval
    dataset:
      name: NanoSCIDOCS
      type: NanoSCIDOCS
    metrics:
    - type: dot_accuracy@1
      value: 0.54
      name: Dot Accuracy@1
    - type: dot_accuracy@3
      value: 0.76
      name: Dot Accuracy@3
    - type: dot_accuracy@5
      value: 0.82
      name: Dot Accuracy@5
    - type: dot_accuracy@10
      value: 0.86
      name: Dot Accuracy@10
    - type: dot_precision@1
      value: 0.54
      name: Dot Precision@1
    - type: dot_precision@3
      value: 0.37999999999999995
      name: Dot Precision@3
    - type: dot_precision@5
      value: 0.30400000000000005
      name: Dot Precision@5
    - type: dot_precision@10
      value: 0.204
      name: Dot Precision@10
    - type: dot_recall@1
      value: 0.11466666666666667
      name: Dot Recall@1
    - type: dot_recall@3
      value: 0.23766666666666666
      name: Dot Recall@3
    - type: dot_recall@5
      value: 0.31466666666666665
      name: Dot Recall@5
    - type: dot_recall@10
      value: 0.4196666666666665
      name: Dot Recall@10
    - type: dot_ndcg@10
      value: 0.42030245497944485
      name: Dot Ndcg@10
    - type: dot_mrr@10
      value: 0.6498333333333332
      name: Dot Mrr@10
    - type: dot_map@100
      value: 0.3374015286377059
      name: Dot Map@100
    - type: query_active_dims
      value: 256.0
      name: Query Active Dims
    - type: query_sparsity_ratio
      value: 0.9375
      name: Query Sparsity Ratio
    - type: corpus_active_dims
      value: 256.0
      name: Corpus Active Dims
    - type: corpus_sparsity_ratio
      value: 0.9375
      name: Corpus Sparsity Ratio
  - task:
      type: sparse-information-retrieval
      name: Sparse Information Retrieval
    dataset:
      name: NanoArguAna
      type: NanoArguAna
    metrics:
    - type: dot_accuracy@1
      value: 0.28
      name: Dot Accuracy@1
    - type: dot_accuracy@3
      value: 0.76
      name: Dot Accuracy@3
    - type: dot_accuracy@5
      value: 0.9
      name: Dot Accuracy@5
    - type: dot_accuracy@10
      value: 0.96
      name: Dot Accuracy@10
    - type: dot_precision@1
      value: 0.28
      name: Dot Precision@1
    - type: dot_precision@3
      value: 0.25333333333333335
      name: Dot Precision@3
    - type: dot_precision@5
      value: 0.17999999999999997
      name: Dot Precision@5
    - type: dot_precision@10
      value: 0.09599999999999997
      name: Dot Precision@10
    - type: dot_recall@1
      value: 0.28
      name: Dot Recall@1
    - type: dot_recall@3
      value: 0.76
      name: Dot Recall@3
    - type: dot_recall@5
      value: 0.9
      name: Dot Recall@5
    - type: dot_recall@10
      value: 0.96
      name: Dot Recall@10
    - type: dot_ndcg@10
      value: 0.651941051318052
      name: Dot Ndcg@10
    - type: dot_mrr@10
      value: 0.5498571428571428
      name: Dot Mrr@10
    - type: dot_map@100
      value: 0.5515326278659611
      name: Dot Map@100
    - type: query_active_dims
      value: 256.0
      name: Query Active Dims
    - type: query_sparsity_ratio
      value: 0.9375
      name: Query Sparsity Ratio
    - type: corpus_active_dims
      value: 256.0
      name: Corpus Active Dims
    - type: corpus_sparsity_ratio
      value: 0.9375
      name: Corpus Sparsity Ratio
  - task:
      type: sparse-information-retrieval
      name: Sparse Information Retrieval
    dataset:
      name: NanoSciFact
      type: NanoSciFact
    metrics:
    - type: dot_accuracy@1
      value: 0.6
      name: Dot Accuracy@1
    - type: dot_accuracy@3
      value: 0.76
      name: Dot Accuracy@3
    - type: dot_accuracy@5
      value: 0.76
      name: Dot Accuracy@5
    - type: dot_accuracy@10
      value: 0.88
      name: Dot Accuracy@10
    - type: dot_precision@1
      value: 0.6
      name: Dot Precision@1
    - type: dot_precision@3
      value: 0.2733333333333334
      name: Dot Precision@3
    - type: dot_precision@5
      value: 0.17599999999999993
      name: Dot Precision@5
    - type: dot_precision@10
      value: 0.1
      name: Dot Precision@10
    - type: dot_recall@1
      value: 0.565
      name: Dot Recall@1
    - type: dot_recall@3
      value: 0.74
      name: Dot Recall@3
    - type: dot_recall@5
      value: 0.76
      name: Dot Recall@5
    - type: dot_recall@10
      value: 0.88
      name: Dot Recall@10
    - type: dot_ndcg@10
      value: 0.7313116540920006
      name: Dot Ndcg@10
    - type: dot_mrr@10
      value: 0.6887698412698413
      name: Dot Mrr@10
    - type: dot_map@100
      value: 0.6840924219150025
      name: Dot Map@100
    - type: query_active_dims
      value: 256.0
      name: Query Active Dims
    - type: query_sparsity_ratio
      value: 0.9375
      name: Query Sparsity Ratio
    - type: corpus_active_dims
      value: 256.0
      name: Corpus Active Dims
    - type: corpus_sparsity_ratio
      value: 0.9375
      name: Corpus Sparsity Ratio
  - task:
      type: sparse-information-retrieval
      name: Sparse Information Retrieval
    dataset:
      name: NanoTouche2020
      type: NanoTouche2020
    metrics:
    - type: dot_accuracy@1
      value: 0.6326530612244898
      name: Dot Accuracy@1
    - type: dot_accuracy@3
      value: 0.8571428571428571
      name: Dot Accuracy@3
    - type: dot_accuracy@5
      value: 0.8775510204081632
      name: Dot Accuracy@5
    - type: dot_accuracy@10
      value: 0.9795918367346939
      name: Dot Accuracy@10
    - type: dot_precision@1
      value: 0.6326530612244898
      name: Dot Precision@1
    - type: dot_precision@3
      value: 0.5986394557823129
      name: Dot Precision@3
    - type: dot_precision@5
      value: 0.5265306122448979
      name: Dot Precision@5
    - type: dot_precision@10
      value: 0.4326530612244897
      name: Dot Precision@10
    - type: dot_recall@1
      value: 0.0443108966783425
      name: Dot Recall@1
    - type: dot_recall@3
      value: 0.12651297913694023
      name: Dot Recall@3
    - type: dot_recall@5
      value: 0.1807810185085916
      name: Dot Recall@5
    - type: dot_recall@10
      value: 0.2908183366162545
      name: Dot Recall@10
    - type: dot_ndcg@10
      value: 0.4946170299181126
      name: Dot Ndcg@10
    - type: dot_mrr@10
      value: 0.7585276967930031
      name: Dot Mrr@10
    - type: dot_map@100
      value: 0.3733282842478698
      name: Dot Map@100
    - type: query_active_dims
      value: 256.0
      name: Query Active Dims
    - type: query_sparsity_ratio
      value: 0.9375
      name: Query Sparsity Ratio
    - type: corpus_active_dims
      value: 256.0
      name: Corpus Active Dims
    - type: corpus_sparsity_ratio
      value: 0.9375
      name: Corpus Sparsity Ratio
  - task:
      type: sparse-nano-beir
      name: Sparse Nano BEIR
    dataset:
      name: NanoBEIR mean
      type: NanoBEIR_mean
    metrics:
    - type: dot_accuracy@1
      value: 0.5732810047095762
      name: Dot Accuracy@1
    - type: dot_accuracy@3
      value: 0.7628571428571429
      name: Dot Accuracy@3
    - type: dot_accuracy@5
      value: 0.8105808477237049
      name: Dot Accuracy@5
    - type: dot_accuracy@10
      value: 0.8707378335949765
      name: Dot Accuracy@10
    - type: dot_precision@1
      value: 0.5732810047095762
      name: Dot Precision@1
    - type: dot_precision@3
      value: 0.356305599162742
      name: Dot Precision@3
    - type: dot_precision@5
      value: 0.27281004709576134
      name: Dot Precision@5
    - type: dot_precision@10
      value: 0.1866656200941915
      name: Dot Precision@10
    - type: dot_recall@1
      value: 0.3370312131842067
      name: Dot Recall@1
    - type: dot_recall@3
      value: 0.512044128836203
      name: Dot Recall@3
    - type: dot_recall@5
      value: 0.5718216761338938
      name: Dot Recall@5
    - type: dot_recall@10
      value: 0.6465436195186451
      name: Dot Recall@10
    - type: dot_ndcg@10
      value: 0.6117808847297039
      name: Dot Ndcg@10
    - type: dot_mrr@10
      value: 0.6785680645884727
      name: Dot Mrr@10
    - type: dot_map@100
      value: 0.5323095762329995
      name: Dot Map@100
    - type: query_active_dims
      value: 256.0
      name: Query Active Dims
    - type: query_sparsity_ratio
      value: 0.9375
      name: Query Sparsity Ratio
    - type: corpus_active_dims
      value: 256.0
      name: Corpus Active Dims
    - type: corpus_sparsity_ratio
      value: 0.9375
      name: Corpus Sparsity Ratio
---

# Sparse CSR model trained on Natural Questions

This is a [CSR Sparse Encoder](https://www.sbert.net/docs/sparse_encoder/usage/usage.html) model finetuned from [mixedbread-ai/mxbai-embed-large-v1](https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1) on the [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions) dataset using the [sentence-transformers](https://www.SBERT.net) library. It maps sentences and paragraphs to a 4096-dimensional sparse vector space with at most 256 active dimensions, and can be used for semantic search and sparse retrieval.

## Model Details

### Model Description
- **Model Type:** CSR Sparse Encoder
- **Base model:** [mixedbread-ai/mxbai-embed-large-v1](https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1) <!-- at revision db9d1fe0f31addb4978201b2bf3e577f3f8900d2 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 4096 dimensions (trained with 256 maximum active dimensions)
- **Similarity Function:** Dot Product
- **Training Dataset:**
    - [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions)
- **Language:** en
- **License:** apache-2.0

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Documentation:** [Sparse Encoder Documentation](https://www.sbert.net/docs/sparse_encoder/usage/usage.html)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sparse Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=sparse-encoder)

### Full Model Architecture

```
SparseEncoder(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): CSRSparsity({'input_dim': 1024, 'hidden_dim': 4096, 'k': 256, 'k_aux': 512, 'normalize': False, 'dead_threshold': 30})
)
```
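
Not part of the generated card, but as a quick sanity check: the `query_sparsity_ratio` and `corpus_sparsity_ratio` values reported under Evaluation follow directly from the `CSRSparsity` settings above, with `k = 256` active dimensions out of `hidden_dim = 4096`:

```python
# Sparsity ratio implied by the CSRSparsity config above:
# k = 256 active dimensions out of hidden_dim = 4096 total dimensions.
hidden_dim = 4096
k = 256

sparsity_ratio = 1 - k / hidden_dim
print(sparsity_ratio)
# 0.9375
```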

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.
```python
from sentence_transformers import SparseEncoder

# Download from the 🤗 Hub
model = SparseEncoder("tomaarsen/csr-mxbai-embed-large-v1-nq-no-reconstruction")
# Run inference
queries = [
    "who is cornelius in the book of acts",
]
documents = [
    'Cornelius the Centurion Cornelius (Greek: Κορνήλιος) was a Roman centurion who is considered by Christians to be one of the first Gentiles to convert to the faith, as related in Acts of the Apostles.',
    "Joe Ranft Ranft reunited with Lasseter when he was hired by Pixar in 1991 as their head of story.[1] There he worked on all of their films produced up to 2006; this included Toy Story (for which he received an Academy Award nomination) and A Bug's Life, as the co-story writer and others as story supervisor. His final film was Cars. He also voiced characters in many of the films, including Heimlich the caterpillar in A Bug's Life, Wheezy the penguin in Toy Story 2, and Jacques the shrimp in Finding Nemo.[1]",
    'Wonderful Tonight "Wonderful Tonight" is a ballad written by Eric Clapton. It was included on Clapton\'s 1977 album Slowhand. Clapton wrote the song about Pattie Boyd.[1] The female vocal harmonies on the song are provided by Marcella Detroit (then Marcy Levy) and Yvonne Elliman.',
]
query_embeddings = model.encode_query(queries)
document_embeddings = model.encode_document(documents)
print(query_embeddings.shape, document_embeddings.shape)
# [1, 4096] [3, 4096]

# Get the similarity scores for the embeddings
similarities = model.similarity(query_embeddings, document_embeddings)
print(similarities)
# tensor([[55.6462, 14.4637, 16.8866]])
```
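
The embeddings above are returned as dense arrays in which at most 256 of the 4096 entries are non-zero. A minimal sketch of how to inspect that sparsity; it uses a synthetic NumPy array as a stand-in for `query_embeddings`, so it runs without downloading the model:

```python
import numpy as np

# Synthetic stand-in for an encode_query() result: a (1, 4096) array with
# exactly 256 non-zero entries, mirroring the model's k = 256 setting.
rng = np.random.default_rng(0)
query_embeddings = np.zeros((1, 4096))
active = rng.choice(4096, size=256, replace=False)
query_embeddings[0, active] = rng.uniform(0.1, 1.0, size=256)

# Count the active (non-zero) dimensions and derive the sparsity ratio,
# matching the query_active_dims / query_sparsity_ratio metrics under Evaluation.
active_dims = np.count_nonzero(query_embeddings, axis=1)
sparsity_ratio = 1 - active_dims / query_embeddings.shape[1]
print(active_dims, sparsity_ratio)
# [256] [0.9375]
```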

<!--
### Direct Usage (Transformers)

<details><summary>Click to see the direct usage in Transformers</summary>

</details>
-->

<!--
### Downstream Usage (Sentence Transformers)

You can finetune this model on your own dataset.

<details><summary>Click to expand</summary>

</details>
-->

<!--
### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->

## Evaluation

### Metrics

#### Sparse Information Retrieval

* Datasets: `NanoMSMARCO_128`, `NanoNFCorpus_128` and `NanoNQ_128`
* Evaluated with [<code>SparseInformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseInformationRetrievalEvaluator) with these parameters:
  ```json
  {
      "max_active_dims": 128
  }
  ```

| Metric | NanoMSMARCO_128 | NanoNFCorpus_128 | NanoNQ_128 |
|:----------------------|:----------------|:-----------------|:-----------|
| dot_accuracy@1 | 0.38 | 0.4 | 0.44 |
| dot_accuracy@3 | 0.62 | 0.52 | 0.64 |
| dot_accuracy@5 | 0.72 | 0.62 | 0.78 |
| dot_accuracy@10 | 0.84 | 0.68 | 0.8 |
| dot_precision@1 | 0.38 | 0.4 | 0.44 |
| dot_precision@3 | 0.2067 | 0.34 | 0.2133 |
| dot_precision@5 | 0.144 | 0.336 | 0.16 |
| dot_precision@10 | 0.084 | 0.286 | 0.084 |
| dot_recall@1 | 0.38 | 0.0266 | 0.43 |
| dot_recall@3 | 0.62 | 0.0858 | 0.6 |
| dot_recall@5 | 0.72 | 0.1054 | 0.73 |
| dot_recall@10 | 0.84 | 0.1391 | 0.76 |
| **dot_ndcg@10** | **0.6038** | **0.3316** | **0.602** |
| dot_mrr@10 | 0.5291 | 0.4816 | 0.5625 |
| dot_map@100 | 0.5356 | 0.1459 | 0.552 |
| query_active_dims | 128.0 | 128.0 | 128.0 |
| query_sparsity_ratio | 0.9688 | 0.9688 | 0.9688 |
| corpus_active_dims | 128.0 | 128.0 | 128.0 |
| corpus_sparsity_ratio | 0.9688 | 0.9688 | 0.9688 |

#### Sparse Nano BEIR

* Dataset: `NanoBEIR_mean_128`
* Evaluated with [<code>SparseNanoBEIREvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseNanoBEIREvaluator) with these parameters:
  ```json
  {
      "dataset_names": [
          "msmarco",
          "nfcorpus",
          "nq"
      ],
      "max_active_dims": 128
  }
  ```

| Metric | Value |
|:----------------------|:-----------|
| dot_accuracy@1 | 0.4067 |
| dot_accuracy@3 | 0.5933 |
| dot_accuracy@5 | 0.7067 |
| dot_accuracy@10 | 0.7733 |
| dot_precision@1 | 0.4067 |
| dot_precision@3 | 0.2533 |
| dot_precision@5 | 0.2133 |
| dot_precision@10 | 0.1513 |
| dot_recall@1 | 0.2789 |
| dot_recall@3 | 0.4353 |
| dot_recall@5 | 0.5185 |
| dot_recall@10 | 0.5797 |
| **dot_ndcg@10** | **0.5125** |
| dot_mrr@10 | 0.5244 |
| dot_map@100 | 0.4112 |
| query_active_dims | 128.0 |
| query_sparsity_ratio | 0.9688 |
| corpus_active_dims | 128.0 |
| corpus_sparsity_ratio | 0.9688 |

#### Sparse Information Retrieval

* Datasets: `NanoMSMARCO_256`, `NanoNFCorpus_256` and `NanoNQ_256`
* Evaluated with [<code>SparseInformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseInformationRetrievalEvaluator) with these parameters:
  ```json
  {
      "max_active_dims": 256
  }
  ```

| Metric | NanoMSMARCO_256 | NanoNFCorpus_256 | NanoNQ_256 |
|:----------------------|:----------------|:-----------------|:-----------|
| dot_accuracy@1 | 0.44 | 0.42 | 0.56 |
| dot_accuracy@3 | 0.66 | 0.54 | 0.7 |
| dot_accuracy@5 | 0.78 | 0.58 | 0.78 |
| dot_accuracy@10 | 0.84 | 0.7 | 0.86 |
| dot_precision@1 | 0.44 | 0.42 | 0.56 |
| dot_precision@3 | 0.22 | 0.36 | 0.2333 |
| dot_precision@5 | 0.156 | 0.344 | 0.16 |
| dot_precision@10 | 0.084 | 0.292 | 0.094 |
| dot_recall@1 | 0.44 | 0.0188 | 0.54 |
| dot_recall@3 | 0.66 | 0.0735 | 0.65 |
| dot_recall@5 | 0.78 | 0.0978 | 0.73 |
| dot_recall@10 | 0.84 | 0.1419 | 0.83 |
| **dot_ndcg@10** | **0.6402** | **0.3365** | **0.6814** |
| dot_mrr@10 | 0.5761 | 0.4958 | 0.6473 |
| dot_map@100 | 0.582 | 0.1417 | 0.631 |
| query_active_dims | 256.0 | 256.0 | 256.0 |
| query_sparsity_ratio | 0.9375 | 0.9375 | 0.9375 |
| corpus_active_dims | 256.0 | 256.0 | 256.0 |
| corpus_sparsity_ratio | 0.9375 | 0.9375 | 0.9375 |

#### Sparse Nano BEIR

* Dataset: `NanoBEIR_mean_256`
* Evaluated with [<code>SparseNanoBEIREvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseNanoBEIREvaluator) with these parameters:
  ```json
  {
      "dataset_names": [
          "msmarco",
          "nfcorpus",
          "nq"
      ],
      "max_active_dims": 256
  }
  ```

| Metric | Value |
|:----------------------|:-----------|
| dot_accuracy@1 | 0.4733 |
| dot_accuracy@3 | 0.6333 |
| dot_accuracy@5 | 0.7133 |
| dot_accuracy@10 | 0.8 |
| dot_precision@1 | 0.4733 |
| dot_precision@3 | 0.2711 |
| dot_precision@5 | 0.22 |
| dot_precision@10 | 0.1567 |
| dot_recall@1 | 0.3329 |
| dot_recall@3 | 0.4612 |
| dot_recall@5 | 0.5359 |
| dot_recall@10 | 0.604 |
| **dot_ndcg@10** | **0.5527** |
| dot_mrr@10 | 0.5731 |
| dot_map@100 | 0.4515 |
| query_active_dims | 256.0 |
| query_sparsity_ratio | 0.9375 |
| corpus_active_dims | 256.0 |
| corpus_sparsity_ratio | 0.9375 |

#### Sparse Information Retrieval

* Datasets: `NanoClimateFEVER`, `NanoDBPedia`, `NanoFEVER`, `NanoFiQA2018`, `NanoHotpotQA`, `NanoMSMARCO`, `NanoNFCorpus`, `NanoNQ`, `NanoQuoraRetrieval`, `NanoSCIDOCS`, `NanoArguAna`, `NanoSciFact` and `NanoTouche2020`
* Evaluated with [<code>SparseInformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseInformationRetrievalEvaluator)

| Metric | NanoClimateFEVER | NanoDBPedia | NanoFEVER | NanoFiQA2018 | NanoHotpotQA | NanoMSMARCO | NanoNFCorpus | NanoNQ | NanoQuoraRetrieval | NanoSCIDOCS | NanoArguAna | NanoSciFact | NanoTouche2020 |
|:----------------------|:-----------------|:------------|:-----------|:-------------|:-------------|:------------|:-------------|:-----------|:-------------------|:------------|:------------|:------------|:---------------|
| dot_accuracy@1 | 0.2 | 0.74 | 0.86 | 0.5 | 0.84 | 0.42 | 0.42 | 0.56 | 0.86 | 0.54 | 0.28 | 0.6 | 0.6327 |
| dot_accuracy@3 | 0.52 | 0.86 | 0.98 | 0.62 | 0.92 | 0.66 | 0.54 | 0.7 | 0.98 | 0.76 | 0.76 | 0.76 | 0.8571 |
| dot_accuracy@5 | 0.56 | 0.92 | 0.98 | 0.64 | 0.96 | 0.78 | 0.58 | 0.78 | 0.98 | 0.82 | 0.9 | 0.76 | 0.8776 |
| dot_accuracy@10 | 0.68 | 0.94 | 0.98 | 0.68 | 0.96 | 0.84 | 0.7 | 0.86 | 1.0 | 0.86 | 0.96 | 0.88 | 0.9796 |
| dot_precision@1 | 0.2 | 0.74 | 0.86 | 0.5 | 0.84 | 0.42 | 0.42 | 0.56 | 0.86 | 0.54 | 0.28 | 0.6 | 0.6327 |
| dot_precision@3 | 0.1933 | 0.5867 | 0.3467 | 0.3133 | 0.4733 | 0.22 | 0.36 | 0.2333 | 0.4 | 0.38 | 0.2533 | 0.2733 | 0.5986 |
| dot_precision@5 | 0.132 | 0.556 | 0.208 | 0.224 | 0.316 | 0.156 | 0.34 | 0.16 | 0.268 | 0.304 | 0.18 | 0.176 | 0.5265 |
| dot_precision@10 | 0.088 | 0.484 | 0.104 | 0.138 | 0.174 | 0.084 | 0.29 | 0.094 | 0.138 | 0.204 | 0.096 | 0.1 | 0.4327 |
| dot_recall@1 | 0.0783 | 0.0837 | 0.8067 | 0.2726 | 0.42 | 0.42 | 0.0188 | 0.54 | 0.7373 | 0.1147 | 0.28 | 0.565 | 0.0443 |
| dot_recall@3 | 0.245 | 0.1623 | 0.9433 | 0.4129 | 0.71 | 0.66 | 0.0735 | 0.65 | 0.9353 | 0.2377 | 0.76 | 0.74 | 0.1265 |
| dot_recall@5 | 0.2833 | 0.2214 | 0.9433 | 0.4606 | 0.79 | 0.78 | 0.0963 | 0.73 | 0.9733 | 0.3147 | 0.9 | 0.76 | 0.1808 |
| dot_recall@10 | 0.3473 | 0.3354 | 0.9433 | 0.5501 | 0.87 | 0.84 | 0.1418 | 0.83 | 0.9967 | 0.4197 | 0.96 | 0.88 | 0.2908 |
| **dot_ndcg@10** | **0.2733** | **0.5972** | **0.9054** | **0.4959** | **0.8027** | **0.6355** | **0.3353** | **0.6814** | **0.9284** | **0.4203** | **0.6519** | **0.7313** | **0.4946** |
| dot_mrr@10 | 0.3666 | 0.8152 | 0.9133 | 0.5642 | 0.8857 | 0.5694 | 0.496 | 0.6473 | 0.9167 | 0.6498 | 0.5499 | 0.6888 | 0.7585 |
| dot_map@100 | 0.2127 | 0.453 | 0.8845 | 0.4426 | 0.7335 | 0.5752 | 0.1414 | 0.6311 | 0.8997 | 0.3374 | 0.5515 | 0.6841 | 0.3733 |
| query_active_dims | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 |
| query_sparsity_ratio | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 |
| corpus_active_dims | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 |
| corpus_sparsity_ratio | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 |

#### Sparse Nano BEIR

* Dataset: `NanoBEIR_mean`
* Evaluated with [<code>SparseNanoBEIREvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseNanoBEIREvaluator) with these parameters:
  ```json
  {
      "dataset_names": [
          "climatefever",
          "dbpedia",
          "fever",
          "fiqa2018",
          "hotpotqa",
          "msmarco",
          "nfcorpus",
          "nq",
          "quoraretrieval",
          "scidocs",
          "arguana",
          "scifact",
          "touche2020"
      ]
  }
  ```

| Metric | Value |
|:----------------------|:-----------|
| dot_accuracy@1 | 0.5733 |
| dot_accuracy@3 | 0.7629 |
| dot_accuracy@5 | 0.8106 |
| dot_accuracy@10 | 0.8707 |
| dot_precision@1 | 0.5733 |
| dot_precision@3 | 0.3563 |
| dot_precision@5 | 0.2728 |
| dot_precision@10 | 0.1867 |
| dot_recall@1 | 0.337 |
| dot_recall@3 | 0.512 |
| dot_recall@5 | 0.5718 |
| dot_recall@10 | 0.6465 |
| **dot_ndcg@10** | **0.6118** |
| dot_mrr@10 | 0.6786 |
| dot_map@100 | 0.5323 |
| query_active_dims | 256.0 |
| query_sparsity_ratio | 0.9375 |
| corpus_active_dims | 256.0 |
| corpus_sparsity_ratio | 0.9375 |
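
The `NanoBEIR_mean` figures are the unweighted mean of the thirteen per-dataset results above. For example, averaging the per-dataset `dot_ndcg@10` values reproduces the reported 0.6118:

```python
# Per-dataset dot_ndcg@10 values, in the column order of the table above
# (NanoClimateFEVER through NanoTouche2020).
ndcg_at_10 = [
    0.2733, 0.5972, 0.9054, 0.4959, 0.8027, 0.6355, 0.3353,
    0.6814, 0.9284, 0.4203, 0.6519, 0.7313, 0.4946,
]
mean_ndcg = sum(ndcg_at_10) / len(ndcg_at_10)
print(round(mean_ndcg, 4))
# 0.6118
```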

<!--
## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->

<!--
### Recommendations

*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->

+ ## Training Details
1809
+
1810
+ ### Training Dataset
1811
+
1812
+ #### natural-questions
1813
+
1814
+ * Dataset: [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions) at [f9e894e](https://huggingface.co/datasets/sentence-transformers/natural-questions/tree/f9e894e1081e206e577b4eaa9ee6de2b06ae6f17)
1815
+ * Size: 99,000 training samples
1816
+ * Columns: <code>query</code> and <code>answer</code>
1817
+ * Approximate statistics based on the first 1000 samples:
1818
+ | | query | answer |
1819
+ |:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
1820
+ | type | string | string |
1821
+ | details | <ul><li>min: 10 tokens</li><li>mean: 11.71 tokens</li><li>max: 26 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 131.81 tokens</li><li>max: 450 tokens</li></ul> |
1822
+ * Samples:
1823
+ | query | answer |
1824
+ |:--------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
1825
+ | <code>who played the father in papa don't preach</code> | <code>Alex McArthur Alex McArthur (born March 6, 1957) is an American actor.</code> |
1826
+ | <code>where was the location of the battle of hastings</code> | <code>Battle of Hastings The Battle of Hastings[a] was fought on 14 October 1066 between the Norman-French army of William, the Duke of Normandy, and an English army under the Anglo-Saxon King Harold Godwinson, beginning the Norman conquest of England. It took place approximately 7 miles (11 kilometres) northwest of Hastings, close to the present-day town of Battle, East Sussex, and was a decisive Norman victory.</code> |
1827
+ | <code>how many puppies can a dog give birth to</code> | <code>Canine reproduction The largest litter size to date was set by a Neapolitan Mastiff in Manea, Cambridgeshire, UK on November 29, 2004; the litter was 24 puppies.[22]</code> |
1828
+ * Loss: [<code>CSRLoss</code>](https://sbert.net/docs/package_reference/sparse_encoder/losses.html#csrloss) with these parameters:
1829
+ ```json
1830
+ {
1831
+ "beta": 0.1,
1832
+ "gamma": 1.0,
1833
+ "loss": "SparseMultipleNegativesRankingLoss(scale=1.0, similarity_fct='dot_score')"
1834
+ }
1835
+ ```
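As a rough sketch of how these parameters interact (the authoritative definition is in the linked CSRLoss documentation and the CSR paper cited below): `gamma` scales the ranking loss named above, while `beta` weights an auxiliary reconstruction term that recycles dead latents. The toy weighting below illustrates the roles of `beta` and `gamma` only; it is not the library's implementation:

```python
def combined_csr_loss(reconstruction_loss: float, aux_loss: float,
                      ranking_loss: float, beta: float = 0.1,
                      gamma: float = 1.0) -> float:
    # Toy illustration: beta down-weights the auxiliary (dead-latent)
    # reconstruction term, gamma scales the ranking objective. See the
    # CSRLoss docs/paper for the exact term structure.
    return reconstruction_loss + beta * aux_loss + gamma * ranking_loss

print(combined_csr_loss(0.5, 0.2, 1.0))  # 0.5 + 0.1*0.2 + 1.0*1.0 = 1.52
```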
+
+ ### Evaluation Dataset
+
+ #### natural-questions
+
+ * Dataset: [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions) at [f9e894e](https://huggingface.co/datasets/sentence-transformers/natural-questions/tree/f9e894e1081e206e577b4eaa9ee6de2b06ae6f17)
+ * Size: 1,000 evaluation samples
+ * Columns: <code>query</code> and <code>answer</code>
+ * Approximate statistics based on the first 1000 samples:
+ | | query | answer |
+ |:---|:---|:---|
+ | type | string | string |
+ | details | <ul><li>min: 10 tokens</li><li>mean: 11.69 tokens</li><li>max: 23 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 134.01 tokens</li><li>max: 512 tokens</li></ul> |
+ * Samples:
+ | query | answer |
+ |:---|:---|
+ | <code>where is the tiber river located in italy</code> | <code>Tiber The Tiber (/ˈtaɪbər/, Latin: Tiberis,[1] Italian: Tevere [ˈteːvere])[2] is the third-longest river in Italy, rising in the Apennine Mountains in Emilia-Romagna and flowing 406 kilometres (252 mi) through Tuscany, Umbria and Lazio, where it is joined by the river Aniene, to the Tyrrhenian Sea, between Ostia and Fiumicino.[3] It drains a basin estimated at 17,375 square kilometres (6,709 sq mi). The river has achieved lasting fame as the main watercourse of the city of Rome, founded on its eastern banks.</code> |
+ | <code>what kind of car does jay gatsby drive</code> | <code>Jay Gatsby At the Buchanan home, Jordan Baker, Nick, Jay, and the Buchanans decide to visit New York City. Tom borrows Gatsby's yellow Rolls Royce to drive up to the city. On the way to New York City, Tom makes a detour at a gas station in "the Valley of Ashes", a run-down part of Long Island. The owner, George Wilson, shares his concern that his wife, Myrtle, may be having an affair. This unnerves Tom, who has been having an affair with Myrtle, and he leaves in a hurry.</code> |
+ | <code>who sings if i can dream about you</code> | <code>I Can Dream About You "I Can Dream About You" is a song performed by American singer Dan Hartman on the soundtrack album of the film Streets of Fire. Released in 1984 as a single from the soundtrack, and included on Hartman's album I Can Dream About You, it reached number 6 on the Billboard Hot 100.[1]</code> |
+ * Loss: [<code>CSRLoss</code>](https://sbert.net/docs/package_reference/sparse_encoder/losses.html#csrloss) with these parameters:
+ ```json
+ {
+     "beta": 0.1,
+     "gamma": 1.0,
+     "loss": "SparseMultipleNegativesRankingLoss(scale=1.0, similarity_fct='dot_score')"
+ }
+ ```
+
+ ### Training Hyperparameters
+ #### Non-Default Hyperparameters
+
+ - `eval_strategy`: steps
+ - `per_device_train_batch_size`: 64
+ - `per_device_eval_batch_size`: 64
+ - `learning_rate`: 4e-05
+ - `num_train_epochs`: 1
+ - `bf16`: True
+ - `load_best_model_at_end`: True
+ - `batch_sampler`: no_duplicates
+
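For reference, the non-default values above collected as plain keyword arguments (a sketch only; the keys mirror the training-arguments field names used in this card, with all other fields left at their defaults):

```python
# Non-default training hyperparameters from the list above, as a dict.
non_default_args = {
    "eval_strategy": "steps",
    "per_device_train_batch_size": 64,
    "per_device_eval_batch_size": 64,
    "learning_rate": 4e-05,
    "num_train_epochs": 1,
    "bf16": True,
    "load_best_model_at_end": True,
    "batch_sampler": "no_duplicates",
}
print(non_default_args["learning_rate"])  # 4e-05
```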
+ #### All Hyperparameters
+ <details><summary>Click to expand</summary>
+
+ - `overwrite_output_dir`: False
+ - `do_predict`: False
+ - `eval_strategy`: steps
+ - `prediction_loss_only`: True
+ - `per_device_train_batch_size`: 64
+ - `per_device_eval_batch_size`: 64
+ - `per_gpu_train_batch_size`: None
+ - `per_gpu_eval_batch_size`: None
+ - `gradient_accumulation_steps`: 1
+ - `eval_accumulation_steps`: None
+ - `torch_empty_cache_steps`: None
+ - `learning_rate`: 4e-05
+ - `weight_decay`: 0.0
+ - `adam_beta1`: 0.9
+ - `adam_beta2`: 0.999
+ - `adam_epsilon`: 1e-08
+ - `max_grad_norm`: 1.0
+ - `num_train_epochs`: 1
+ - `max_steps`: -1
+ - `lr_scheduler_type`: linear
+ - `lr_scheduler_kwargs`: {}
+ - `warmup_ratio`: 0.0
+ - `warmup_steps`: 0
+ - `log_level`: passive
+ - `log_level_replica`: warning
+ - `log_on_each_node`: True
+ - `logging_nan_inf_filter`: True
+ - `save_safetensors`: True
+ - `save_on_each_node`: False
+ - `save_only_model`: False
+ - `restore_callback_states_from_checkpoint`: False
+ - `no_cuda`: False
+ - `use_cpu`: False
+ - `use_mps_device`: False
+ - `seed`: 42
+ - `data_seed`: None
+ - `jit_mode_eval`: False
+ - `use_ipex`: False
+ - `bf16`: True
+ - `fp16`: False
+ - `fp16_opt_level`: O1
+ - `half_precision_backend`: auto
+ - `bf16_full_eval`: False
+ - `fp16_full_eval`: False
+ - `tf32`: None
+ - `local_rank`: 0
+ - `ddp_backend`: None
+ - `tpu_num_cores`: None
+ - `tpu_metrics_debug`: False
+ - `debug`: []
+ - `dataloader_drop_last`: False
+ - `dataloader_num_workers`: 0
+ - `dataloader_prefetch_factor`: None
+ - `past_index`: -1
+ - `disable_tqdm`: False
+ - `remove_unused_columns`: True
+ - `label_names`: None
+ - `load_best_model_at_end`: True
+ - `ignore_data_skip`: False
+ - `fsdp`: []
+ - `fsdp_min_num_params`: 0
+ - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
+ - `fsdp_transformer_layer_cls_to_wrap`: None
+ - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
+ - `deepspeed`: None
+ - `label_smoothing_factor`: 0.0
+ - `optim`: adamw_torch
+ - `optim_args`: None
+ - `adafactor`: False
+ - `group_by_length`: False
+ - `length_column_name`: length
+ - `ddp_find_unused_parameters`: None
+ - `ddp_bucket_cap_mb`: None
+ - `ddp_broadcast_buffers`: False
+ - `dataloader_pin_memory`: True
+ - `dataloader_persistent_workers`: False
+ - `skip_memory_metrics`: True
+ - `use_legacy_prediction_loop`: False
+ - `push_to_hub`: False
+ - `resume_from_checkpoint`: None
+ - `hub_model_id`: None
+ - `hub_strategy`: every_save
+ - `hub_private_repo`: None
+ - `hub_always_push`: False
+ - `gradient_checkpointing`: False
+ - `gradient_checkpointing_kwargs`: None
+ - `include_inputs_for_metrics`: False
+ - `include_for_metrics`: []
+ - `eval_do_concat_batches`: True
+ - `fp16_backend`: auto
+ - `push_to_hub_model_id`: None
+ - `push_to_hub_organization`: None
+ - `mp_parameters`:
+ - `auto_find_batch_size`: False
+ - `full_determinism`: False
+ - `torchdynamo`: None
+ - `ray_scope`: last
+ - `ddp_timeout`: 1800
+ - `torch_compile`: False
+ - `torch_compile_backend`: None
+ - `torch_compile_mode`: None
+ - `include_tokens_per_second`: False
+ - `include_num_input_tokens_seen`: False
+ - `neftune_noise_alpha`: None
+ - `optim_target_modules`: None
+ - `batch_eval_metrics`: False
+ - `eval_on_start`: False
+ - `use_liger_kernel`: False
+ - `eval_use_gather_object`: False
+ - `average_tokens_across_devices`: False
+ - `prompts`: None
+ - `batch_sampler`: no_duplicates
+ - `multi_dataset_batch_sampler`: proportional
+ - `router_mapping`: {}
+ - `learning_rate_mapping`: {}
+
+ </details>
+
+ ### Training Logs
+ | Epoch | Step | Training Loss | Validation Loss | NanoMSMARCO_128_dot_ndcg@10 | NanoNFCorpus_128_dot_ndcg@10 | NanoNQ_128_dot_ndcg@10 | NanoBEIR_mean_128_dot_ndcg@10 | NanoMSMARCO_256_dot_ndcg@10 | NanoNFCorpus_256_dot_ndcg@10 | NanoNQ_256_dot_ndcg@10 | NanoBEIR_mean_256_dot_ndcg@10 | NanoClimateFEVER_dot_ndcg@10 | NanoDBPedia_dot_ndcg@10 | NanoFEVER_dot_ndcg@10 | NanoFiQA2018_dot_ndcg@10 | NanoHotpotQA_dot_ndcg@10 | NanoMSMARCO_dot_ndcg@10 | NanoNFCorpus_dot_ndcg@10 | NanoNQ_dot_ndcg@10 | NanoQuoraRetrieval_dot_ndcg@10 | NanoSCIDOCS_dot_ndcg@10 | NanoArguAna_dot_ndcg@10 | NanoSciFact_dot_ndcg@10 | NanoTouche2020_dot_ndcg@10 | NanoBEIR_mean_dot_ndcg@10 |
+ |:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
+ | -1 | -1 | - | - | 0.6253 | 0.3224 | 0.5893 | 0.5123 | 0.6112 | 0.3278 | 0.6352 | 0.5248 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
+ | 0.0646 | 100 | 0.0542 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
+ | 0.1293 | 200 | 0.0566 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
+ | 0.1939 | 300 | 0.0455 | 0.0390 | 0.5697 | 0.3083 | 0.6074 | 0.4952 | 0.5709 | 0.3402 | 0.6637 | 0.5249 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
+ | 0.2586 | 400 | 0.0445 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
+ | 0.3232 | 500 | 0.0463 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
+ | 0.3878 | 600 | 0.056 | 0.0454 | 0.5981 | 0.3334 | 0.6076 | 0.5130 | 0.6217 | 0.3417 | 0.6337 | 0.5324 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
+ | 0.4525 | 700 | 0.0505 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
+ | 0.5171 | 800 | 0.0549 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
+ | 0.5818 | 900 | 0.0614 | 0.0350 | 0.6058 | 0.3401 | 0.6084 | 0.5181 | 0.6293 | 0.3178 | 0.6585 | 0.5352 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
+ | 0.6464 | 1000 | 0.0519 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
+ | 0.7111 | 1100 | 0.039 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
+ | 0.7757 | 1200 | 0.045 | 0.0384 | 0.6045 | 0.3348 | 0.6124 | 0.5172 | 0.6227 | 0.3333 | 0.6829 | 0.5463 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
+ | 0.8403 | 1300 | 0.0536 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
+ | 0.9050 | 1400 | 0.0389 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
+ | **0.9696** | **1500** | **0.0413** | **0.0401** | **0.6038** | **0.3316** | **0.602** | **0.5125** | **0.6402** | **0.3365** | **0.6814** | **0.5527** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** |
+ | -1 | -1 | - | - | - | - | - | - | - | - | - | - | 0.2733 | 0.5972 | 0.9054 | 0.4959 | 0.8027 | 0.6355 | 0.3353 | 0.6814 | 0.9284 | 0.4203 | 0.6519 | 0.7313 | 0.4946 | 0.6118 |
+
+ * The bold row denotes the saved checkpoint.
2019
+
2020
+ ### Environmental Impact
2021
+ Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon).
2022
+ - **Energy Consumed**: 0.138 kWh
2023
+ - **Carbon Emitted**: 0.054 kg of CO2
2024
+ - **Hours Used**: 0.409 hours
2025
+
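These figures also imply two derived numbers not reported by CodeCarbon in this card: a grid carbon intensity of about 0.054 / 0.138 ≈ 0.39 kg CO2 per kWh, and an average draw of about 0.138 / 0.409 ≈ 0.34 kW over the run:

```python
energy_kwh = 0.138  # Energy Consumed
co2_kg = 0.054      # Carbon Emitted
hours = 0.409       # Hours Used

print(round(co2_kg / energy_kwh, 2))  # 0.39  (kg CO2 per kWh, grid carbon intensity)
print(round(energy_kwh / hours, 2))   # 0.34  (kW, average power draw)
```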
2026
+ ### Training Hardware
2027
+ - **On Cloud**: No
2028
+ - **GPU Model**: 1 x NVIDIA GeForce RTX 3090
2029
+ - **CPU Model**: 13th Gen Intel(R) Core(TM) i7-13700K
2030
+ - **RAM Size**: 31.78 GB
2031
+
2032
+ ### Framework Versions
2033
+ - Python: 3.11.6
2034
+ - Sentence Transformers: 4.2.0.dev0
2035
+ - Transformers: 4.52.4
2036
+ - PyTorch: 2.6.0+cu124
2037
+ - Accelerate: 1.5.1
2038
+ - Datasets: 2.21.0
2039
+ - Tokenizers: 0.21.1
2040
+
2041
+ ## Citation
2042
+
2043
+ ### BibTeX
2044
+
2045
+ #### Sentence Transformers
2046
+ ```bibtex
2047
+ @inproceedings{reimers-2019-sentence-bert,
2048
+ title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
2049
+ author = "Reimers, Nils and Gurevych, Iryna",
2050
+ booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
2051
+ month = "11",
2052
+ year = "2019",
2053
+ publisher = "Association for Computational Linguistics",
2054
+ url = "https://arxiv.org/abs/1908.10084",
2055
+ }
2056
+ ```
2057
+
2058
+ #### CSRLoss
2059
+ ```bibtex
2060
+ @misc{wen2025matryoshkarevisitingsparsecoding,
2061
+ title={Beyond Matryoshka: Revisiting Sparse Coding for Adaptive Representation},
2062
+ author={Tiansheng Wen and Yifei Wang and Zequn Zeng and Zhong Peng and Yudi Su and Xinyang Liu and Bo Chen and Hongwei Liu and Stefanie Jegelka and Chenyu You},
2063
+ year={2025},
2064
+ eprint={2503.01776},
2065
+ archivePrefix={arXiv},
2066
+ primaryClass={cs.LG},
2067
+ url={https://arxiv.org/abs/2503.01776},
2068
+ }
2069
+ ```
2070
+
2071
+ #### SparseMultipleNegativesRankingLoss
2072
+ ```bibtex
2073
+ @misc{henderson2017efficient,
2074
+ title={Efficient Natural Language Response Suggestion for Smart Reply},
2075
+ author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
2076
+ year={2017},
2077
+ eprint={1705.00652},
2078
+ archivePrefix={arXiv},
2079
+ primaryClass={cs.CL}
2080
+ }
2081
+ ```
2082
+
2083
+ <!--
2084
+ ## Glossary
2085
+
2086
+ *Clearly define terms in order to be accessible across audiences.*
2087
+ -->
2088
+
2089
+ <!--
2090
+ ## Model Card Authors
2091
+
2092
+ *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
2093
+ -->
2094
+
2095
+ <!--
2096
+ ## Model Card Contact
2097
+
2098
+ *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
2099
+ -->
config.json ADDED
@@ -0,0 +1,25 @@
+ {
+   "architectures": [
+     "BertModel"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "classifier_dropout": null,
+   "gradient_checkpointing": false,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 1024,
+   "initializer_range": 0.02,
+   "intermediate_size": 4096,
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 512,
+   "model_type": "bert",
+   "num_attention_heads": 16,
+   "num_hidden_layers": 24,
+   "pad_token_id": 0,
+   "position_embedding_type": "absolute",
+   "torch_dtype": "float32",
+   "transformers_version": "4.52.4",
+   "type_vocab_size": 2,
+   "use_cache": false,
+   "vocab_size": 30522
+ }
config_sentence_transformers.json ADDED
@@ -0,0 +1,15 @@
+ {
+   "__version__": {
+     "sentence_transformers": "4.2.0.dev0",
+     "transformers": "4.52.4",
+     "pytorch": "2.6.0+cu124"
+   },
+   "prompts": {
+     "query": "Represent this sentence for searching relevant passages: ",
+     "document": "",
+     "passage": ""
+   },
+   "default_prompt_name": null,
+   "model_type": "SparseEncoder",
+   "similarity_fn_name": "dot"
+ }
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e86b2a89f7f8933cf7bd90586cdf69d0012140e412818234b234f807e51ee574
+ size 1340612432
modules.json ADDED
@@ -0,0 +1,20 @@
+ [
+   {
+     "idx": 0,
+     "name": "0",
+     "path": "",
+     "type": "sentence_transformers.models.Transformer"
+   },
+   {
+     "idx": 1,
+     "name": "1",
+     "path": "1_Pooling",
+     "type": "sentence_transformers.models.Pooling"
+   },
+   {
+     "idx": 2,
+     "name": "2",
+     "path": "2_CSRSparsity",
+     "type": "sentence_transformers.sparse_encoder.models.CSRSparsity"
+   }
+ ]
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "max_seq_length": 512,
+   "do_lower_case": false
+ }
special_tokens_map.json ADDED
@@ -0,0 +1,37 @@
+ {
+   "cls_token": {
+     "content": "[CLS]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "mask_token": {
+     "content": "[MASK]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": {
+     "content": "[PAD]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "sep_token": {
+     "content": "[SEP]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "unk_token": {
+     "content": "[UNK]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,58 @@
+ {
+   "added_tokens_decoder": {
+     "0": {
+       "content": "[PAD]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "100": {
+       "content": "[UNK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "101": {
+       "content": "[CLS]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "102": {
+       "content": "[SEP]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "103": {
+       "content": "[MASK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "clean_up_tokenization_spaces": true,
+   "cls_token": "[CLS]",
+   "do_basic_tokenize": true,
+   "do_lower_case": true,
+   "extra_special_tokens": {},
+   "mask_token": "[MASK]",
+   "model_max_length": 512,
+   "never_split": null,
+   "pad_token": "[PAD]",
+   "sep_token": "[SEP]",
+   "strip_accents": null,
+   "tokenize_chinese_chars": true,
+   "tokenizer_class": "BertTokenizer",
+   "unk_token": "[UNK]"
+ }
vocab.txt ADDED
The diff for this file is too large to render. See raw diff