tomaarsen (HF Staff) committed
Commit 1023cb8 · verified · 1 parent: 82c61d1

Add new SparseEncoder model

1_SpladePooling/config.json ADDED
@@ -0,0 +1,5 @@
+ {
+   "pooling_strategy": "max",
+   "activation_function": "relu",
+   "word_embedding_dimension": 30522
+ }
README.md ADDED
@@ -0,0 +1,598 @@
+ ---
+ language:
+ - en
+ license: apache-2.0
+ tags:
+ - sentence-transformers
+ - sparse-encoder
+ - sparse
+ - splade
+ - generated_from_trainer
+ - dataset_size:10000
+ - loss:SpladeLoss
+ - loss:SparseMarginMSELoss
+ - loss:FlopsLoss
+ base_model: Luyu/co-condenser-marco
+ widget:
+ - text: There are 25.15 miles from Miami to Fort Lauderdale in north direction and
+     29.11 miles (46.85 kilometers) by car, following the I-95 route. Miami and Fort
+     Lauderdale are 31 minutes far apart, if you drive non-stop. This is the fastest
+     route from Miami, FL to Fort Lauderdale, FL. The halfway point is Aventura, FL.
+ - text: Free Universal VIN decoder to check vehicle data and history. This is a universal
+     VIN decoder. Every car has a unique identifier code called a VIN. This number
+     contains vital information about the car, such as its manufacturer, year of production,
+     the plant it was produced in, type of engine, model and more.
+ - text: Various vascular tissues in the root allow for transportation of water and
+     nutrients to the rest of theplant.Plant cells have a cell wall to provide support,
+     a large vacuole for storage of minerals, food, andchloroplasts where photosynthesis
+     takes place.
+ - text: 'The name Julia is an American baby name. In American the meaning of the name
+     Julia is: Youthful. Swedish Meaning: The name Julia is a Swedish baby name. In
+     Swedish the meaning of the name Julia is: Youth.Greek Meaning: The name Julia
+     is a Greek baby name. In Greek the meaning of the name Julia is: Downy. Hairy.
+     Derived from the clan name of Roman dictator Gaius Julius Caesar.Latin Meaning:
+     The name Julia is a Latin baby name.In Latin the meaning of the name Julia is:
+     Young. The feminine form of Julius. A character in Shakespeare''s play ''Two Gentlemen
+     of Verona''. Shakespearean Meaning: The name Julia is a Shakespearean baby name.he
+     name Julia is a Latin baby name. In Latin the meaning of the name Julia is: Young.
+     The feminine form of Julius. A character in Shakespeare''s play ''Two Gentlemen
+     of Verona''.'
+ - text: Usually, an LFT blood test measures the amount of bilirubin in the blood.
+     Bilirubin is released when red blood cells breakdown, and it is the liver that
+     detoxifies the bilirubin and helps to eliminate it from the body. Bilirubin is
+     a part of the digestive juice, bile, which the liver produces.
+ datasets:
+ - tomaarsen/msmarco-Qwen3-Reranker-0.6B
+ pipeline_tag: feature-extraction
+ library_name: sentence-transformers
+ metrics:
+ - dot_accuracy@1
+ - dot_accuracy@3
+ - dot_accuracy@5
+ - dot_accuracy@10
+ - dot_precision@1
+ - dot_precision@3
+ - dot_precision@5
+ - dot_precision@10
+ - dot_recall@1
+ - dot_recall@3
+ - dot_recall@5
+ - dot_recall@10
+ - dot_ndcg@10
+ - dot_mrr@10
+ - dot_map@100
+ - query_active_dims
+ - query_sparsity_ratio
+ - corpus_active_dims
+ - corpus_sparsity_ratio
+ co2_eq_emissions:
+   emissions: 33.256828829604686
+   energy_consumed: 0.08555867690314094
+   source: codecarbon
+   training_type: fine-tuning
+   on_cloud: false
+   cpu_model: 13th Gen Intel(R) Core(TM) i7-13700K
+   ram_total_size: 31.777088165283203
+   hours_used: 0.276
+   hardware_used: 1 x NVIDIA GeForce RTX 3090
+ model-index:
+ - name: splade-co-condenser-marco trained on MS MARCO hard negatives with distillation
+   results:
+   - task:
+       type: sparse-information-retrieval
+       name: Sparse Information Retrieval
+     dataset:
+       name: msmarco eval 1kq 1kd
+       type: msmarco-eval-1kq-1kd
+     metrics:
+     - type: dot_accuracy@1
+       value: 0.946
+       name: Dot Accuracy@1
+     - type: dot_accuracy@3
+       value: 0.981
+       name: Dot Accuracy@3
+     - type: dot_accuracy@5
+       value: 0.987
+       name: Dot Accuracy@5
+     - type: dot_accuracy@10
+       value: 0.993
+       name: Dot Accuracy@10
+     - type: dot_precision@1
+       value: 0.946
+       name: Dot Precision@1
+     - type: dot_precision@3
+       value: 0.32699999999999996
+       name: Dot Precision@3
+     - type: dot_precision@5
+       value: 0.19740000000000005
+       name: Dot Precision@5
+     - type: dot_precision@10
+       value: 0.09930000000000001
+       name: Dot Precision@10
+     - type: dot_recall@1
+       value: 0.946
+       name: Dot Recall@1
+     - type: dot_recall@3
+       value: 0.981
+       name: Dot Recall@3
+     - type: dot_recall@5
+       value: 0.987
+       name: Dot Recall@5
+     - type: dot_recall@10
+       value: 0.993
+       name: Dot Recall@10
+     - type: dot_ndcg@10
+       value: 0.9716808947138706
+       name: Dot Ndcg@10
+     - type: dot_mrr@10
+       value: 0.9646123015873015
+       name: Dot Mrr@10
+     - type: dot_map@100
+       value: 0.9647768153143154
+       name: Dot Map@100
+     - type: query_active_dims
+       value: 20.368000030517578
+       name: Query Active Dims
+     - type: query_sparsity_ratio
+       value: 0.9993326780672788
+       name: Query Sparsity Ratio
+     - type: corpus_active_dims
+       value: 105.83899688720703
+       name: Corpus Active Dims
+     - type: corpus_sparsity_ratio
+       value: 0.9965323701956881
+       name: Corpus Sparsity Ratio
+ ---
+
+ # splade-co-condenser-marco trained on MS MARCO hard negatives with distillation
+
+ This is a [SPLADE Sparse Encoder](https://www.sbert.net/docs/sparse_encoder/usage/usage.html) model finetuned from [Luyu/co-condenser-marco](https://huggingface.co/Luyu/co-condenser-marco) on the [msmarco-Qwen3-Reranker-0.6B](https://huggingface.co/datasets/tomaarsen/msmarco-Qwen3-Reranker-0.6B) dataset using the [sentence-transformers](https://www.SBERT.net) library. It maps sentences & paragraphs to a 30522-dimensional sparse vector space and can be used for semantic search and sparse retrieval.
+
+ ## Model Details
+
+ ### Model Description
+ - **Model Type:** SPLADE Sparse Encoder
+ - **Base model:** [Luyu/co-condenser-marco](https://huggingface.co/Luyu/co-condenser-marco) <!-- at revision e0cef0ab2410aae0f0994366ddefb5649a266709 -->
+ - **Maximum Sequence Length:** 256 tokens
+ - **Output Dimensionality:** 30522 dimensions
+ - **Similarity Function:** Dot Product
+ - **Training Dataset:**
+     - [msmarco-Qwen3-Reranker-0.6B](https://huggingface.co/datasets/tomaarsen/msmarco-Qwen3-Reranker-0.6B)
+ - **Language:** en
+ - **License:** apache-2.0
+
+ ### Model Sources
+
+ - **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
+ - **Documentation:** [Sparse Encoder Documentation](https://www.sbert.net/docs/sparse_encoder/usage/usage.html)
+ - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
+ - **Hugging Face:** [Sparse Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=sparse-encoder)
+
+ ### Full Model Architecture
+
+ ```
+ SparseEncoder(
+   (0): MLMTransformer({'max_seq_length': 256, 'do_lower_case': False, 'architecture': 'BertForMaskedLM'})
+   (1): SpladePooling({'pooling_strategy': 'max', 'activation_function': 'relu', 'word_embedding_dimension': 30522})
+ )
+ ```
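+
+ As a minimal sketch (assuming the `MLMTransformer` and `SpladePooling` constructors mirror the settings stored in this repository's config files), the same two-module pipeline can be assembled by hand:
+
+ ```python
+ from sentence_transformers import SparseEncoder
+ from sentence_transformers.sparse_encoder.models import MLMTransformer, SpladePooling
+
+ # Masked-language-model backbone producing per-token logits over the 30522-entry vocabulary
+ mlm = MLMTransformer("Luyu/co-condenser-marco", max_seq_length=256)
+ # SPLADE pooling: log(1 + ReLU(logits)), max-pooled over the sequence dimension
+ pooling = SpladePooling(pooling_strategy="max", activation_function="relu")
+
+ model = SparseEncoder(modules=[mlm, pooling], similarity_fn_name="dot")
+ ```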
+
+ ## Usage
+
+ ### Direct Usage (Sentence Transformers)
+
+ First install the Sentence Transformers library:
+
+ ```bash
+ pip install -U sentence-transformers
+ ```
+
+ Then you can load this model and run inference.
+
+ ```python
+ from sentence_transformers import SparseEncoder
+
+ # Download from the 🤗 Hub
+ model = SparseEncoder("tomaarsen/splade-co-condenser-marco-msmarco-hard-negatives-v5")
+ # Run inference
+ queries = [
+     "what does ly mean in a blood test",
+ ]
+ documents = [
+     'According to the Hormone-Refractory Prostate Cancer Association, LY on a blood test stands for lymphocytes. The number in the results represents the percentage of lymphocytes in the white blood count. Lymphocytes should count for 15 to 46.8 percent of white blood cells. Continue Reading.',
+     "FROM OUR COMMUNITY. Hi Terry, The LY (Lymphocytes) in your blood test is; the type of white blood cell found in the blood and lymph systems; part of the immune system. BUN/CREAT - Bun and Creatinine are tests done to monitor kidney function. I'm sorry, but I've never heard of the other 2.",
+     'FROM OUR EXPERTS. Trace lysed blood refers to a finding that is usually reported from a urinary dip stick analysis. It implies that there is a small quantity of red cells in the urine that have broken open. The developer on the dip stick reacts with the hemoglobin that is released when the red cells are lysed.',
+ ]
+ query_embeddings = model.encode_query(queries)
+ document_embeddings = model.encode_document(documents)
+ print(query_embeddings.shape, document_embeddings.shape)
+ # [1, 30522] [3, 30522]
+
+ # Get the similarity scores for the embeddings
+ similarities = model.similarity(query_embeddings, document_embeddings)
+ print(similarities)
+ # tensor([[11.6518, 11.0770, 11.5680]])
+ ```
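+
+ Because every embedding dimension corresponds to a vocabulary token, the sparse vectors are directly interpretable. As a sketch (assuming the `decode` helper of the Sentence Transformers sparse-encoder API), you can list the highest-weighted tokens for a query:
+
+ ```python
+ # Map the sparse vector back to (token, weight) pairs; top_k is illustrative
+ decoded = model.decode(query_embeddings[0], top_k=10)
+ print(decoded)
+ # Expect query terms such as 'ly', 'blood' and 'test' among the top entries
+ ```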
+
+ <!--
+ ### Direct Usage (Transformers)
+
+ <details><summary>Click to see the direct usage in Transformers</summary>
+
+ </details>
+ -->
+
+ <!--
+ ### Downstream Usage (Sentence Transformers)
+
+ You can finetune this model on your own dataset.
+
+ <details><summary>Click to expand</summary>
+
+ </details>
+ -->
+
+ <!--
+ ### Out-of-Scope Use
+
+ *List how the model may foreseeably be misused and address what users ought not to do with the model.*
+ -->
+
+ ## Evaluation
+
+ ### Metrics
+
+ #### Sparse Information Retrieval
+
+ * Dataset: `msmarco-eval-1kq-1kd`
+ * Evaluated with [<code>SparseInformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseInformationRetrievalEvaluator)
+
+ | Metric                | Value      |
+ |:----------------------|:-----------|
+ | dot_accuracy@1        | 0.946      |
+ | dot_accuracy@3        | 0.981      |
+ | dot_accuracy@5        | 0.987      |
+ | dot_accuracy@10       | 0.993      |
+ | dot_precision@1       | 0.946      |
+ | dot_precision@3       | 0.327      |
+ | dot_precision@5       | 0.1974     |
+ | dot_precision@10      | 0.0993     |
+ | dot_recall@1          | 0.946      |
+ | dot_recall@3          | 0.981      |
+ | dot_recall@5          | 0.987      |
+ | dot_recall@10         | 0.993      |
+ | **dot_ndcg@10**       | **0.9717** |
+ | dot_mrr@10            | 0.9646     |
+ | dot_map@100           | 0.9648     |
+ | query_active_dims     | 20.368     |
+ | query_sparsity_ratio  | 0.9993     |
+ | corpus_active_dims    | 105.839    |
+ | corpus_sparsity_ratio | 0.9965     |
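+
+ The sparsity ratios follow directly from the active-dimension counts: `query_sparsity_ratio = 1 - 20.368 / 30522 ≈ 0.9993`, i.e. on average only about 20 of the 30522 vocabulary dimensions are non-zero for a query, and about 106 (`1 - 105.839 / 30522 ≈ 0.9965`) for a corpus document.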
+
+ <!--
+ ## Bias, Risks and Limitations
+
+ *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
+ -->
+
+ <!--
+ ### Recommendations
+
+ *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
+ -->
+
+ ## Training Details
+
+ ### Training Dataset
+
+ #### msmarco-Qwen3-Reranker-0.6B
+
+ * Dataset: [msmarco-Qwen3-Reranker-0.6B](https://huggingface.co/datasets/tomaarsen/msmarco-Qwen3-Reranker-0.6B) at [20c25c8](https://huggingface.co/datasets/tomaarsen/msmarco-Qwen3-Reranker-0.6B/tree/20c25c858f80ba96bdb58f1558746e077001303a)
+ * Size: 10,000 training samples
+ * Columns: <code>query</code>, <code>positive</code>, <code>negative_1</code>, <code>negative_2</code>, <code>negative_3</code>, <code>negative_4</code>, <code>negative_5</code>, <code>negative_6</code>, <code>negative_7</code>, <code>negative_8</code>, and <code>score</code>
+ * Approximate statistics based on the first 1000 samples:
+ | | query | positive | negative_1 | negative_2 | negative_3 | negative_4 | negative_5 | negative_6 | negative_7 | negative_8 | score |
+ |:--------|:---------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------|
+ | type | string | string | string | string | string | string | string | string | string | string | list |
+ | details | <ul><li>min: 4 tokens</li><li>mean: 9.18 tokens</li><li>max: 45 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 80.31 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 68.3 tokens</li><li>max: 197 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 70.27 tokens</li><li>max: 209 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 70.16 tokens</li><li>max: 241 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 71.17 tokens</li><li>max: 211 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 71.8 tokens</li><li>max: 190 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 72.04 tokens</li><li>max: 194 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 73.0 tokens</li><li>max: 203 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 71.02 tokens</li><li>max: 206 tokens</li></ul> | <ul><li>size: 9 elements</li></ul> |
+ * Samples:
+ | query | positive | negative_1 | negative_2 | negative_3 | negative_4 | negative_5 | negative_6 | negative_7 | negative_8 | score |
+ |:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|
+ | <code>what is clomiphene</code> | <code>Uses of This Medicine. Clomiphene is used as a fertility medicine in some women who are unable to become pregnant. Clomiphene probably works by changing the hormone balance of the body. In women, this causes ovulation to occur and prepares the body for pregnancy.ses of This Medicine. Clomiphene is used as a fertility medicine in some women who are unable to become pregnant. Clomiphene probably works by changing the hormone balance of the body. In women, this causes ovulation to occur and prepares the body for pregnancy.</code> | <code>Clomiphene citrate, a synthetic hormone commonly used to induce or regulate ovulation, is the most often prescribed fertility pill. Brand names for clomiphene citrate include Clomid and Serophene. Clomiphene works indirectly to stimulate ovulation.</code> | <code>Occasionally, clomiphene can stimulate the ovaries too much, causing multiple eggs to be released, which can result in multiple births, such as twins or triplets (see Clomid and Twins) . Clomiphene is one of the least expensive and easiest-to-use fertility drugs. However, it will not work for all types of infertility. Your healthcare provider needs to try to find your cause of infertility before you try clomiphene.</code> | <code>Clomiphene Citrate offers two benefits to the performance enhancing athlete with one being primary. Most commonly, this SERM is used for post cycle recovery purposes; specifically to stimulate natural testosterone production that has been suppressed due to the use of anabolic steroids.</code> | <code>PCOS and ovulation problems and Clomid treatment. Clomid (clomiphene citrate or Serophene) is an oral medication that is commonly used for the treatment of infertility. It is often given to try to induce ovulation in women that do not develop and release an egg (ovulate) on their own.</code> | <code>Indication: Clomid (clomiphene citrate) is often the first choice for treating infertility, because it's effective and been used for more than 40 years.</code> | <code>Clomid Description. 1 Clomid (clomiphene citrate tablets USP) is an orally administered, nonsteroidal, ovulatory stimulant designated chemically as 2-[p-(2-chloro-1,2-diphenylvinyl)phenoxy] triethylamine citrate (1:1). It has the molecular formula of C26H28ClNO • C6H8O7 and a molecular weight of 598.09.</code> | <code>PCOS and ovulation problems and Clomid treatment. Clomid (clomiphene citrate or Serophene) is an oral medication that is commonly used for the treatment of infertility. 1 It is often given to try to induce ovulation in women that do not develop and release an egg (ovulate) on their own. Clomid is started early in the menstrual cycle and is taken for five days either from cycle days 3 through 7, or from day 5 through 9. 2 Clomid is usually started at a dose of one tablet (50mg) daily-taken any time of day.</code> | <code>Clomid is taken as a pill. This is unlike the stronger fertility drugs, which require injection. Clomid is also very effective, stimulating ovulation 80 percent of the time. Clomid may also be marketed under the name Serophene, or you may see it sold under its generic name, clomiphene citrate. Note: Clomid can also be used as a treatment for male infertility. This article focuses on Clomid treatment in women.</code> | <code>[4.75390625, 6.9375, 3.92578125, 1.0400390625, 5.61328125, ...]</code> |
+ | <code>typical accountant cost for it contractor</code> | <code>In the current market, we’ve seen rates as low as £50 +VAT, and as high as £180 +VAT for dedicated contractor accountants. Interestingly, the average cost of contractor accounting has not risen in line with inflation over the past decade.</code> | <code>So, how much does a contractor cost, anywhere from 5% to 25% of the total project cost, with the average ranging 10-15%.ypically the contractor' s crew will be general carpentry trades people, some who may have more specialized skills. Exactly how a general contractor charges for a project depends on the type of contract you agree to. There are three common types of cost contracts, fixed price, time & materials and cost plus a fee.</code> | <code>1 Accountants charge $150-$400 or more an hour, depending on the type of work, the size of the firm and its location. 2 You'll pay lower rates for routine work done by a less-experienced associate or lesser-trained employee, such as $30-$50 for bookkeeping services. 3 An accountant's total fee depends on the project. For a simple start-up, expect a minimum of 0.5-1.5 hours of consultation ($75-$600) to go over your business structure and basic tax issues.</code> | <code>So, how much does a contractor cost, anywhere from 5% to 25% of the total project cost, with the average ranging 10-15%.xactly how a general contractor charges for a project depends on the type of contract you agree to. There are three common types of cost contracts, fixed price, time & materials and cost plus a fee. Each contract type has pros and cons for both the consumer and for the contractor.</code> | <code>1 Accountants charge $150-$400 or more an hour, depending on the type of work, the size of the firm and its location. 2 You'll pay lower rates for routine work done by a less-experienced associate or lesser-trained employee, such as $30-$50 for bookkeeping services. 3 An accountant's total fee depends on the project.</code> | <code>average data entry keystrokes per hour salaries the average salary for data entry keystrokes per hour jobs is $ 20000</code> | <code>Accounting services are typically $250 to $400 per month, or $350 to $500 per quarter. Sales tax and bank recs included. We do all the processing, filing and tax deposits. 5 employees, bi-weekly payroll, direct deposit, $135 per month.</code> | <code>The less that is outsourced, the cheaper it will be for you. A bookkeeper should be paid between $15 and $18 per hour. An accountant with a undergraduate degree (4-years) should be paid somewhere around $20/hour but that still depends on what you're having them do. An accountant with a graduate degree (masters) should be paid between $25 and $30 per hour.</code> | <code>Pay by Experience Level for Intelligence Analyst. Median of all compensation (including tips, bonus, and overtime) by years of experience. Intelligence Analysts with a lot of experience tend to enjoy higher earnings.</code> | <code>[7.44921875, 3.271484375, 5.859375, 3.234375, 5.421875, ...]</code> |
+ | <code>what is mch on a blood test</code> | <code>What High Levels Mean. MCH levels in blood tests are considered high if they are 35 or higher. A normal hemoglobin level is considered to be in the range between 26 and 33 picograms per red blood cell. High MCH levels can indicate macrocytic anemia, which can be caused by insufficient vitamin B12.acrocytic RBCs are large so tend to have a higher MCH, while microcytic red cells would have a lower value.”. MCH is one of three red blood cell indices (MCHC and MCV are the other two). The measurements are done by machine and can help with diagnosis of medical problems.</code> | <code>MCH stands for mean corpuscular hemoglobin. It estimates the average amount of hemoglobin in each red blood cell, measured in picograms (a trillionth of a gram). Automated cell counters calculate the MCH, which is reported as part of a complete blood count (CBC) test. MCH may be low in iron-deficiency anemia, and may be high in anemia due to vitamin B12 or folate deficiency. Other forms of anemia can also cause MCH to be abnormal. Doctors only use the MCH as supporting information, not to make a diagnosis.</code> | <code>A. MCH stands for mean corpuscular hemoglobin. It estimates the average amount of hemoglobin in each red blood cell, measured in picograms (a trillionth of a gram). Automated cell counters calculate the MCH, which is reported as part of a complete blood count (CBC) test. MCH may be low in iron-deficiency anemia, and may be high in anemia due to vitamin B12 or folate deficiency. Other forms of anemia can also cause MCH to be abnormal.</code> | <code>The test used to determine the quantity of hemoglobin in the blood is known as the MCH blood test. The full form of MCH is Mean Corpuscular Hemoglobin. This test is therefore used to determine the average amount of hemoglobin per red blood cell in the body. The results of the MCH blood test are therefore reported in picograms, a tiny measure of weight.</code> | <code>MCH blood test high indicates that there is a poor supply of oxygen to the blood where as MCH blood test low mean that hemoglobin is too little in the cells indicating a lack of iron. It is important that iron is maintained at a certain level as too much or too little iron can be dangerous to your body.</code> | <code>slide 1 of 7. What Is MCH? MCH is the initialism for Mean Corpuscular Hemoglobin. Taken from Latin, the term refers to the average amount of hemoglobin found in red blood cells. A CBC (complete blood count) blood test can be used to monitor MCH levels in the blood. Lab Tests Online explains that the MCH aspect of a CBC test “is a measurement of the average amount of oxygen-carrying hemoglobin inside a red blood cell. Macrocytic RBCs are large so tend to have a higher MCH, while microcytic red cells would have a lower value..</code> | <code>The test used to determine the quantity of hemoglobin in the blood is known as the MCH blood test. The full form of MCH is Mean Corpuscular Hemoglobin. This test is therefore used to determine the average amount of hemoglobin per red blood cell in the body. The results of the MCH blood test are therefore reported in picograms, a tiny measure of weight. The normal range of the MCH blood test is between 26 and 33 pg per cell.</code> | <code>A MCHC test is a test that is carried out to test a person for anemia. The MCHC in a MCHC test stands for Mean Corpuscular Hemoglobin Concentration. MCHC is the calculation of the average hemoglobin inside a red blood cell. A MCHC test can be performed along with a MCV test (Mean Corpuscular Volume).Both levels are used to test people for anemia.The MCHC test is also known as the MCH blood test which tests the levels of hemoglobin in the blood. The MCHC test can be ordered as part of a complete blood count (CBC) test.CHC is measured in grams per deciliter. Normal readings for MCHC are 31 grams per deciliter to 35 grams per deciliter. A MCHC blood test may be ordered when a person is showing signs of fatigue or weakness, when there is an infection, is bleeding or bruising easily or when there is an inflammation.</code> | <code>The test looks at the average amount of hemoglobin per red cell. So MCHC = the amount of hemoglobin present in each red blood cell. A MCHC blood test could be ordered for someone who has signs of fatigue or weakness, when there is an infection, is bleeding or bruising easily or when there is noticeable inflammation.</code> | <code>[6.44921875, 7.05078125, 7.2109375, 8.40625, 6.53515625, ...]</code> |
+ * Loss: [<code>SpladeLoss</code>](https://sbert.net/docs/package_reference/sparse_encoder/losses.html#spladeloss) with these parameters:
+   ```json
+   {
+       "loss": "SparseMarginMSELoss",
+       "document_regularizer_weight": 0.08,
+       "query_regularizer_weight": 0.1
+   }
+   ```
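+
+ A sketch of instantiating this loss in code (assuming the `SpladeLoss` and `SparseMarginMSELoss` classes from the Sentence Transformers sparse-encoder package, with the parameter names shown in the JSON above):
+
+ ```python
+ from sentence_transformers import SparseEncoder
+ from sentence_transformers.sparse_encoder.losses import SpladeLoss, SparseMarginMSELoss
+
+ model = SparseEncoder("Luyu/co-condenser-marco")
+ loss = SpladeLoss(
+     model,
+     loss=SparseMarginMSELoss(model),   # distillation: match the teacher's margin scores
+     document_regularizer_weight=0.08,  # FLOPS regularizer on document vectors
+     query_regularizer_weight=0.1,      # FLOPS regularizer on query vectors
+ )
+ ```
+
+ `SparseMarginMSELoss` fits the column layout above: for each query it regresses the student's margin (positive score minus negative score) onto the teacher margins derived from the <code>score</code> column, while the two FLOPS terms keep the query and document activations sparse.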
+
+ ### Evaluation Dataset
+
+ #### msmarco-Qwen3-Reranker-0.6B
+
+ * Dataset: [msmarco-Qwen3-Reranker-0.6B](https://huggingface.co/datasets/tomaarsen/msmarco-Qwen3-Reranker-0.6B) at [20c25c8](https://huggingface.co/datasets/tomaarsen/msmarco-Qwen3-Reranker-0.6B/tree/20c25c858f80ba96bdb58f1558746e077001303a)
+ * Size: 1,000 evaluation samples
+ * Columns: <code>query</code>, <code>positive</code>, <code>negative_1</code>, <code>negative_2</code>, <code>negative_3</code>, <code>negative_4</code>, <code>negative_5</code>, <code>negative_6</code>, <code>negative_7</code>, <code>negative_8</code>, and <code>score</code>
+ * Approximate statistics based on the first 1000 samples:
+ | | query | positive | negative_1 | negative_2 | negative_3 | negative_4 | negative_5 | negative_6 | negative_7 | negative_8 | score |
+ |:--------|:---------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------|
+ | type | string | string | string | string | string | string | string | string | string | string | list |
+ | details | <ul><li>min: 4 tokens</li><li>mean: 9.05 tokens</li><li>max: 23 tokens</li></ul> | <ul><li>min: 20 tokens</li><li>mean: 81.61 tokens</li><li>max: 244 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 69.2 tokens</li><li>max: 231 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 68.76 tokens</li><li>max: 198 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 70.99 tokens</li><li>max: 225 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 70.7 tokens</li><li>max: 236 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 72.51 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 68.95 tokens</li><li>max: 203 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 71.68 tokens</li><li>max: 220 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 70.18 tokens</li><li>max: 213 tokens</li></ul> | <ul><li>size: 9 elements</li></ul> |
+ * Samples:
+ | query | positive | negative_1 | negative_2 | negative_3 | negative_4 | negative_5 | negative_6 | negative_7 | negative_8 | score |
+ |:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|
+ | <code>how many people employed by shell</code> | <code>Shell worldwide. Royal Dutch Shell was formed in 1907, although our history dates back to the early 19th century, to a small shop in London where the Samuel family sold sea shells. Today, Shell is one of the world’s major energy companies, employing an average of 93,000 people and operating in more than 70 countries. Our headquarters are in The Hague, the Netherlands, and our Chief Executive Officer is Ben van Beurden.</code> | <code>Show sources information. This statistic shows the number of employees at SeaWorld Entertainment, Inc. in the United States, by type. As of December 2016, SeaWorld employed 5,000 full-time employees and counted approximately 13,000 seasonal employees during their peak operating season.</code> | <code>Jobs, companies, people, and articles for LinkedIn’s Payroll Specialist - Addus Homecare, Inc. members. Insights about Payroll Specialist - Addus Homecare, Inc. members on LinkedIn. Median salary $31,300.</code> | <code>As of July 2014, there are 139 million people employed in the United States. This number is up by 209,000 employees from June and by 1.47 million from the beginning of 2014.</code> | <code>average data entry keystrokes per hour salaries the average salary for data entry keystrokes per hour jobs is $ 20000</code> | <code>Research and review Plano Synergy jobs. Learn more about a career with Plano Synergy including all recent jobs, hiring trends, salaries, work environment and more. Find Jobs Company Reviews Find Salaries Find Resumes Employers / Post Job Upload your resume Sign in</code> | <code>From millions of real job salary data. 13 Customer Support Specialist salary data. Average Customer Support Specialist salary is $59,032 Detailed Customer Support Specialist starting salary, median salary, pay scale, bonus data report Register & Know how much $ you can earn | Sign In</code> | <code>From millions of real job salary data. 1 Ceo Ally salary data. Average Ceo Ally salary is $55,000 Detailed Ceo Ally starting salary, median salary, pay scale, bonus data report</code> | <code>HelpSystems benefits and perks, including insurance benefits, retirement benefits, and vacation policy. Reported anonymously by HelpSystems employees. Glassdoor uses cookies to improve your site experience.</code> | <code>[6.265625, -1.3671875, -6.91796875, 1.111328125, -7.96875, ...]</code> |
+ | <code>what is a lcsw</code> | <code>LCSW is an acronym for licensed clinical social worker, and people with this title are skilled professionals who meet certain requirements and work in a variety of fields. The term social worker is not always synonymous with licensed clinical social worker.</code> | <code>LISW means the person is a Licensed Independent Social Worker. LCSW means the person is a Licensed Clinical Social Worker. Source(s): Introduction to Social Work 101 at University of Nevada, Las Vega (UNLV) Dorothy K. · 1 decade ago.</code> | <code>An LCSW is a licensed clinical social worker. A LMHC is the newest addition to the field of mental health. They are highly similar and can do most of the same things with few exceptions. One thing to keep in mind is that because the LMHC lincense is so new, there are fewer in number in the field.n LCSW is a licensed clinical social worker. A LMHC is the newest addition to the field of mental health. They are highly similar and can do most of the same things with few exceptions. One thing to keep in mind is that because the LMHC lincense is so new, there are fewer in number in the field.</code> | <code>The Licensed Clinical Social Worker or LCSW, is a sub-sector within the field of Social Work. They work with clients in order to help them deal with issues involving their mental and emotional health. This could be related to substance abuse, past trauma or mental illness.</code> | <code>Licensed Clinical Social Worker | LCSW. The Licensed Clinical Social Worker or LCSW, is a sub-sector within the field of Social Work. LCSW's work with clients in order to help deal with issues involving mental and emotional health. There are a wide variety of specializations the Licensed Clinical Social Worker can focus on.</code> | <code>The LMSW exam is a computer-based test containing 170 multiple-choice questions designed to measure minimum competencies in four categories of social work practice: Human development, diversity, and behavior in the environment. Assessment and intervention planning.</code> | <code>The Licensed Clinical Social Worker, also known as the LCSW, is a branch of social work that specializes in mental health therapy in a counseling format. Becoming an LCSW requires a significant degree of training, including having earned a Master of Social Work (MSW) degree from a Council on Social Work Education (CSWE) accredited program.</code> | <code>a. The examination requirements for licensure as an LCSW include passing the Clinical Examination of the ASWB or the Clinical Social Workers Examination of the State of California. Scope of practice-Limitations. a.To the extent they are prepared through education and training, an LCSW can engage in all acts and practices defined as the practice of clinical social work. Certified Social Work (CSW): CSW means a licensed certified social worker. A CSW must have a master s degree.</code> | <code>The LTCM Client is a way for companies to stay in touch with you, their customers, in a way that is unobtrusive and completely under the users' control. It's an application that runs quietly on the computer. Users can and should customize the client to match their desired preferences.</code> | <code>[7.34375, 6.046875, 7.09765625, 6.46484375, 7.28515625, ...]</code> |
+ | <code>does oolong tea have much caffeine?</code> | <code>At a given weight, tea contains more caffeine than coffee, but this doesn’t mean that a usual portion of tea contains more caffeine than coffee because tea is usually brewed in a weak way. Some kinds of tea, such as oolong and black tea, contain higher level of caffeine than most other teas. Among six basic teas (green, black, yellow, white, oolong, dark), green tea contains less caffeine than black tea and white tea contains less than green tea. But many studies found that the caffeine content varies more among individual teas than it does among broad categories.</code> | <code>Actually, oolong tea has less caffeine than coffee and black tea. A cup of oolong tea only has about 1/3 of caffeine of a cup of coffee. According to a research conducted by HICKS M.B, the caffeine decreases whenever the tea leaves go through the process of brewing.</code> | <code>Oolong tea contains caffeine. Caffeine works by stimulating the central nervous system (CNS), heart, and muscles. Oolong tea also contains theophylline and theobromine, which are chemicals similar to caffeine. Too much oolong tea, more than five cups per day, can cause side effects because of the caffeine.</code> | <code>Oolong tea, made from more mature leaves, usually have less caffeine than green tea. On the flip side, mature leaves contain less theanine, a sweet, natural relaxant that makes a tea much less caffeinated than it actually is. That is the theory, anyway.</code> | <code>Oolong tea is a product made from the leaves, buds, and stems of the Camellia sinensis plant. This is the same plant that is also used to make black tea and green tea. The difference is in the processing.Oolong tea is partially fermented, black tea is fully fermented, and green tea is unfermented. Oolong tea is used to sharpen thinking skills and improve mental alertness. It is also used to prevent cancer, tooth decay, osteoporosis, and heart disease.owever, do not drink more than 2 cups a day of oolong tea. That amount of tea contains about 200 mg of caffeine. Too much caffeine during pregnancy might cause premature delivery, low birth weight, and harm to the baby.</code> | <code>A Department of Nutritional Services report provides the following ranges of caffeine content for a cup of tea made with loose leaves: 1 Black Tea: 23 - 110 mg. 2 Oolong Tea: 12 - 55 mg. Green Tea: 8 - 36 mg.</code> | <code>Oolong tea is a product made from the leaves, buds, and stems of the Camellia sinensis plant. This is the same plant that is also used to make black tea and green tea. The difference is in the processing. Oolong tea is partially fermented, black tea is fully fermented, and green tea is unfermented. Oolong tea is used to sharpen thinking skills and improve mental alertness. It is also used to prevent cancer, tooth decay, osteoporosis, and heart disease.</code> | <code>Health Effects of Tea – Caffeine. In dry form, a kilogram of black tea has twice the caffeine as a kilogram of coffee…. But one kilogram of black tea makes about 450 cups of tea and one kilogram of coffee makes about 100 cups of coffee, so…. There is less caffeine in a cup of tea than in a cup of coffee. Green teas have less caffeine than black teas, and white teas have even less caffeine than green teas. Oolong teas fall between black and green teas. Herbal tea, because it is not made from the same tea plant, is caffeine-free, naturally. Here is a graphical representation of their respective caffeine content.</code> | <code>The average 8-ounce serving of brewed black tea contains 14 to 70 mg of caffeine. This compares to 24 to 45 mg of caffeine found in green tea. An 8-ounce glass of instant iced tea prepared with water contains 11 to 47 mg of caffeine. Most ready-to-drink bottled teas contain 5 to 40 mg of caffeine. Just as with coffee, decaffeinated tea still contains 5 to 10 mg of caffeine per cup.</code> | <code>[7.60546875, 8.78125, 9.109375, 8.609375, 7.984375, ...]</code> |
+ * Loss: [<code>SpladeLoss</code>](https://sbert.net/docs/package_reference/sparse_encoder/losses.html#spladeloss) with these parameters:
+   ```json
+   {
+       "loss": "SparseMarginMSELoss",
+       "document_regularizer_weight": 0.08,
+       "query_regularizer_weight": 0.1
+   }
+   ```
+
+ ### Training Hyperparameters
+ #### Non-Default Hyperparameters
+
+ - `eval_strategy`: steps
+ - `learning_rate`: 4e-05
+ - `num_train_epochs`: 1
+ - `warmup_ratio`: 0.1
+ - `bf16`: True
+ - `load_best_model_at_end`: True
+
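+ As a sketch, these non-default values map onto a training run roughly as follows (assuming the `SparseEncoderTrainer` / `SparseEncoderTrainingArguments` API of Sentence Transformers v5; the dataset split handling is illustrative):
+
+ ```python
+ from datasets import load_dataset
+ from sentence_transformers import SparseEncoder
+ from sentence_transformers.sparse_encoder import SparseEncoderTrainer, SparseEncoderTrainingArguments
+ from sentence_transformers.sparse_encoder.losses import SpladeLoss, SparseMarginMSELoss
+
+ model = SparseEncoder("Luyu/co-condenser-marco")
+ loss = SpladeLoss(model, loss=SparseMarginMSELoss(model),
+                   document_regularizer_weight=0.08, query_regularizer_weight=0.1)
+
+ # The card reports 10,000 training and 1,000 evaluation samples
+ dataset = load_dataset("tomaarsen/msmarco-Qwen3-Reranker-0.6B", split="train")
+ dataset = dataset.train_test_split(test_size=1_000)
+
+ args = SparseEncoderTrainingArguments(
+     output_dir="splade-co-condenser-marco",
+     eval_strategy="steps",
+     learning_rate=4e-5,
+     num_train_epochs=1,
+     warmup_ratio=0.1,
+     bf16=True,
+     load_best_model_at_end=True,
+ )
+
+ trainer = SparseEncoderTrainer(
+     model=model,
+     args=args,
+     train_dataset=dataset["train"],
+     eval_dataset=dataset["test"],
+     loss=loss,
+ )
+ trainer.train()
+ ```
+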
+ #### All Hyperparameters
+ <details><summary>Click to expand</summary>
+
+ - `overwrite_output_dir`: False
+ - `do_predict`: False
+ - `eval_strategy`: steps
+ - `prediction_loss_only`: True
+ - `per_device_train_batch_size`: 8
+ - `per_device_eval_batch_size`: 8
+ - `per_gpu_train_batch_size`: None
+ - `per_gpu_eval_batch_size`: None
+ - `gradient_accumulation_steps`: 1
+ - `eval_accumulation_steps`: None
+ - `torch_empty_cache_steps`: None
+ - `learning_rate`: 4e-05
+ - `weight_decay`: 0.0
+ - `adam_beta1`: 0.9
+ - `adam_beta2`: 0.999
+ - `adam_epsilon`: 1e-08
+ - `max_grad_norm`: 1.0
+ - `num_train_epochs`: 1
+ - `max_steps`: -1
+ - `lr_scheduler_type`: linear
+ - `lr_scheduler_kwargs`: {}
+ - `warmup_ratio`: 0.1
+ - `warmup_steps`: 0
+ - `log_level`: passive
+ - `log_level_replica`: warning
+ - `log_on_each_node`: True
+ - `logging_nan_inf_filter`: True
+ - `save_safetensors`: True
+ - `save_on_each_node`: False
+ - `save_only_model`: False
+ - `restore_callback_states_from_checkpoint`: False
+ - `no_cuda`: False
+ - `use_cpu`: False
+ - `use_mps_device`: False
+ - `seed`: 42
+ - `data_seed`: None
+ - `jit_mode_eval`: False
+ - `use_ipex`: False
+ - `bf16`: True
+ - `fp16`: False
+ - `fp16_opt_level`: O1
+ - `half_precision_backend`: auto
+ - `bf16_full_eval`: False
+ - `fp16_full_eval`: False
+ - `tf32`: None
+ - `local_rank`: 0
+ - `ddp_backend`: None
+ - `tpu_num_cores`: None
+ - `tpu_metrics_debug`: False
+ - `debug`: []
+ - `dataloader_drop_last`: False
+ - `dataloader_num_workers`: 0
+ - `dataloader_prefetch_factor`: None
+ - `past_index`: -1
+ - `disable_tqdm`: False
+ - `remove_unused_columns`: True
+ - `label_names`: None
+ - `load_best_model_at_end`: True
+ - `ignore_data_skip`: False
+ - `fsdp`: []
+ - `fsdp_min_num_params`: 0
+ - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
+ - `fsdp_transformer_layer_cls_to_wrap`: None
+ - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
+ - `deepspeed`: None
+ - `label_smoothing_factor`: 0.0
+ - `optim`: adamw_torch
+ - `optim_args`: None
+ - `adafactor`: False
+ - `group_by_length`: False
+ - `length_column_name`: length
+ - `ddp_find_unused_parameters`: None
+ - `ddp_bucket_cap_mb`: None
+ - `ddp_broadcast_buffers`: False
+ - `dataloader_pin_memory`: True
+ - `dataloader_persistent_workers`: False
+ - `skip_memory_metrics`: True
+ - `use_legacy_prediction_loop`: False
+ - `push_to_hub`: False
+ - `resume_from_checkpoint`: None
+ - `hub_model_id`: None
+ - `hub_strategy`: every_save
+ - `hub_private_repo`: None
+ - `hub_always_push`: False
+ - `hub_revision`: None
+ - `gradient_checkpointing`: False
+ - `gradient_checkpointing_kwargs`: None
+ - `include_inputs_for_metrics`: False
+ - `include_for_metrics`: []
+ - `eval_do_concat_batches`: True
+ - `fp16_backend`: auto
+ - `push_to_hub_model_id`: None
+ - `push_to_hub_organization`: None
+ - `mp_parameters`: 
+ - `auto_find_batch_size`: False
+ - `full_determinism`: False
+ - `torchdynamo`: None
+ - `ray_scope`: last
+ - `ddp_timeout`: 1800
+ - `torch_compile`: False
+ - `torch_compile_backend`: None
+ - `torch_compile_mode`: None
+ - `include_tokens_per_second`: False
+ - `include_num_input_tokens_seen`: False
+ - `neftune_noise_alpha`: None
+ - `optim_target_modules`: None
+ - `batch_eval_metrics`: False
+ - `eval_on_start`: False
+ - `use_liger_kernel`: False
+ - `liger_kernel_config`: None
+ - `eval_use_gather_object`: False
+ - `average_tokens_across_devices`: False
+ - `prompts`: None
+ - `batch_sampler`: batch_sampler
+ - `multi_dataset_batch_sampler`: proportional
+ - `router_mapping`: {}
+ - `learning_rate_mapping`: {}
+
+ </details>
+
+ ### Training Logs
+ | Epoch    | Step     | Training Loss | Validation Loss | msmarco-eval-1kq-1kd_dot_ndcg@10 |
+ |:--------:|:--------:|:-------------:|:---------------:|:--------------------------------:|
+ | 0.032    | 40       | 528227.75     | -               | -                                |
+ | 0.064    | 80       | 344.8533      | -               | -                                |
+ | 0.096    | 120      | 37.9373       | -               | -                                |
+ | 0.128    | 160      | 26.9242       | -               | -                                |
+ | 0.16     | 200      | 22.3622       | 29.6919         | 0.8848                           |
+ | 0.192    | 240      | 20.5145       | -               | -                                |
+ | 0.224    | 280      | 17.9177       | -               | -                                |
+ | 0.256    | 320      | 17.8416       | -               | -                                |
+ | 0.288    | 360      | 19.0142       | -               | -                                |
+ | 0.32     | 400      | 19.0727       | 18.2416         | 0.9238                           |
+ | 0.352    | 440      | 18.5064       | -               | -                                |
+ | 0.384    | 480      | 18.1284       | -               | -                                |
+ | 0.416    | 520      | 16.5013       | -               | -                                |
+ | 0.448    | 560      | 15.044        | -               | -                                |
+ | 0.48     | 600      | 17.1579       | 17.8858         | 0.9629                           |
+ | 0.512    | 640      | 16.7288       | -               | -                                |
+ | 0.544    | 680      | 15.5561       | -               | -                                |
+ | 0.576    | 720      | 15.8044       | -               | -                                |
+ | 0.608    | 760      | 14.5402       | -               | -                                |
+ | 0.64     | 800      | 15.8633       | 15.2570         | 0.9569                           |
+ | 0.672    | 840      | 16.6143       | -               | -                                |
+ | 0.704    | 880      | 15.974        | -               | -                                |
+ | 0.736    | 920      | 13.0052       | -               | -                                |
+ | 0.768    | 960      | 13.4638       | -               | -                                |
+ | 0.8      | 1000     | 14.8979       | 14.2573         | 0.9615                           |
+ | 0.832    | 1040     | 13.8599       | -               | -                                |
+ | 0.864    | 1080     | 14.5366       | -               | -                                |
+ | 0.896    | 1120     | 14.2389       | -               | -                                |
+ | 0.928    | 1160     | 12.3206       | -               | -                                |
+ | **0.96** | **1200** | **13.0217**   | **14.1264**     | **0.9717**                       |
+ | 0.992    | 1240     | 12.7925       | -               | -                                |
+ | -1       | -1       | -             | -               | 0.9717                           |
+
+ * The bold row denotes the saved checkpoint.
+
+ ### Environmental Impact
+ Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon).
+ - **Energy Consumed**: 0.086 kWh
+ - **Carbon Emitted**: 0.033 kg of CO2
+ - **Hours Used**: 0.276 hours
+
+ ### Training Hardware
+ - **On Cloud**: No
+ - **GPU Model**: 1 x NVIDIA GeForce RTX 3090
+ - **CPU Model**: 13th Gen Intel(R) Core(TM) i7-13700K
+ - **RAM Size**: 31.78 GB
+
+ ### Framework Versions
+ - Python: 3.11.6
+ - Sentence Transformers: 5.0.0
+ - Transformers: 4.55.0.dev0
+ - PyTorch: 2.7.1+cu126
+ - Accelerate: 1.6.0
+ - Datasets: 3.6.0
+ - Tokenizers: 0.21.1
+
+ ## Citation
+
+ ### BibTeX
+
+ #### Sentence Transformers
+ ```bibtex
+ @inproceedings{reimers-2019-sentence-bert,
+     title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
+     author = "Reimers, Nils and Gurevych, Iryna",
+     booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
+     month = "11",
+     year = "2019",
+     publisher = "Association for Computational Linguistics",
+     url = "https://arxiv.org/abs/1908.10084",
+ }
+ ```
+
+ #### SpladeLoss
+ ```bibtex
+ @misc{formal2022distillationhardnegativesampling,
+     title={From Distillation to Hard Negative Sampling: Making Sparse Neural IR Models More Effective},
+     author={Thibault Formal and Carlos Lassance and Benjamin Piwowarski and Stéphane Clinchant},
+     year={2022},
+     eprint={2205.04733},
+     archivePrefix={arXiv},
+     primaryClass={cs.IR},
+     url={https://arxiv.org/abs/2205.04733},
+ }
+ ```
+
+ #### SparseMarginMSELoss
+ ```bibtex
+ @misc{hofstätter2021improving,
+     title={Improving Efficient Neural Ranking Models with Cross-Architecture Knowledge Distillation},
+     author={Sebastian Hofstätter and Sophia Althammer and Michael Schröder and Mete Sertkan and Allan Hanbury},
+     year={2021},
+     eprint={2010.02666},
+     archivePrefix={arXiv},
+     primaryClass={cs.IR}
+ }
+ ```
+
+ #### FlopsLoss
+ ```bibtex
+ @article{paria2020minimizing,
+     title={Minimizing flops to learn efficient sparse representations},
+     author={Paria, Biswajit and Yeh, Chih-Kuan and Yen, Ian EH and Xu, Ning and Ravikumar, Pradeep and P{\'o}czos, Barnab{\'a}s},
+     journal={arXiv preprint arXiv:2004.05665},
+     year={2020}
+ }
+ ```
+
+ <!--
+ ## Glossary
+
+ *Clearly define terms in order to be accessible across audiences.*
+ -->
+
+ <!--
+ ## Model Card Authors
+
+ *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
+ -->
+
+ <!--
+ ## Model Card Contact
+
+ *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
+ -->
config.json ADDED
@@ -0,0 +1,25 @@
+ {
+   "architectures": [
+     "BertForMaskedLM"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "classifier_dropout": null,
+   "gradient_checkpointing": false,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 768,
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 512,
+   "model_type": "bert",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "pad_token_id": 0,
+   "position_embedding_type": "absolute",
+   "torch_dtype": "float32",
+   "transformers_version": "4.55.0.dev0",
+   "type_vocab_size": 2,
+   "use_cache": true,
+   "vocab_size": 30522
+ }
config_sentence_transformers.json ADDED
@@ -0,0 +1,14 @@
+ {
+   "model_type": "SparseEncoder",
+   "__version__": {
+     "sentence_transformers": "5.0.0",
+     "transformers": "4.55.0.dev0",
+     "pytorch": "2.7.1+cu126"
+   },
+   "prompts": {
+     "query": "",
+     "document": ""
+   },
+   "default_prompt_name": null,
+   "similarity_fn_name": "dot"
+ }
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:56d5fe4e4926d991591c873996de3167476a65c4901e73348389d41d1d2720a5
+ size 438080896
modules.json ADDED
@@ -0,0 +1,14 @@
+ [
+   {
+     "idx": 0,
+     "name": "0",
+     "path": "",
+     "type": "sentence_transformers.sparse_encoder.models.MLMTransformer"
+   },
+   {
+     "idx": 1,
+     "name": "1",
+     "path": "1_SpladePooling",
+     "type": "sentence_transformers.sparse_encoder.models.SpladePooling"
+   }
+ ]
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "max_seq_length": 256,
+   "do_lower_case": false
+ }
special_tokens_map.json ADDED
@@ -0,0 +1,7 @@
+ {
+   "cls_token": "[CLS]",
+   "mask_token": "[MASK]",
+   "pad_token": "[PAD]",
+   "sep_token": "[SEP]",
+   "unk_token": "[UNK]"
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,58 @@
+ {
+   "added_tokens_decoder": {
+     "0": {
+       "content": "[PAD]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "100": {
+       "content": "[UNK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "101": {
+       "content": "[CLS]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "102": {
+       "content": "[SEP]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "103": {
+       "content": "[MASK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "clean_up_tokenization_spaces": true,
+   "cls_token": "[CLS]",
+   "do_basic_tokenize": true,
+   "do_lower_case": true,
+   "extra_special_tokens": {},
+   "mask_token": "[MASK]",
+   "model_max_length": 512,
+   "never_split": null,
+   "pad_token": "[PAD]",
+   "sep_token": "[SEP]",
+   "strip_accents": null,
+   "tokenize_chinese_chars": true,
+   "tokenizer_class": "BertTokenizer",
+   "unk_token": "[UNK]"
+ }
vocab.txt ADDED
The diff for this file is too large to render. See raw diff