CharlesPing committed
Commit 61dcad6 · verified · 1 Parent(s): 5424563

Add new CrossEncoder model

Files changed (3)
  1. README.md +256 -35
  2. config.json +1 -1
  3. model.safetensors +2 -2
README.md CHANGED
@@ -1,24 +1,49 @@
1
  ---
2
  tags:
3
  - sentence-transformers
4
- - sentence-similarity
5
- - feature-extraction
6
- pipeline_tag: sentence-similarity
7
  library_name: sentence-transformers
8
  ---
9
 
10
- # SentenceTransformer
11
 
12
- This is a [sentence-transformers](https://www.SBERT.net) model trained. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
13
 
14
  ## Model Details
15
 
16
  ### Model Description
17
- - **Model Type:** Sentence Transformer
18
- <!-- - **Base model:** [Unknown](https://huggingface.co/unknown) -->
19
  - **Maximum Sequence Length:** 512 tokens
20
- - **Output Dimensionality:** 384 dimensions
21
- - **Similarity Function:** Cosine Similarity
22
  <!-- - **Training Dataset:** Unknown -->
23
  <!-- - **Language:** Unknown -->
24
  <!-- - **License:** Unknown -->
@@ -26,17 +51,9 @@ This is a [sentence-transformers](https://www.SBERT.net) model trained. It maps
26
  ### Model Sources
27
 
28
  - **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
29
  - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
30
- - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
31
-
32
- ### Full Model Architecture
33
-
34
- ```
35
- SentenceTransformer(
36
- (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
37
- (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
38
- )
39
- ```
40
 
41
  ## Usage
42
 
@@ -50,24 +67,34 @@ pip install -U sentence-transformers
50
 
51
  Then you can load this model and run inference.
52
  ```python
53
- from sentence_transformers import SentenceTransformer
54
 
55
  # Download from the 🤗 Hub
56
- model = SentenceTransformer("CharlesPing/finetuned-cross-encoder-l6-v2")
57
- # Run inference
58
- sentences = [
59
- 'The weather is lovely today.',
60
- "It's so sunny outside!",
61
- 'He drove to the stadium.',
62
  ]
63
- embeddings = model.encode(sentences)
64
- print(embeddings.shape)
65
- # [3, 384]
66
-
67
- # Get the similarity scores for the embeddings
68
- similarities = model.similarity(embeddings, embeddings)
69
- print(similarities.shape)
70
- # [3, 3]
71
  ```
72
 
73
  <!--
@@ -94,6 +121,26 @@ You can finetune this model on your own dataset.
94
  *List how the model may foreseeably be misused and address what users ought not to do with the model.*
95
  -->
96
 
97
  <!--
98
  ## Bias, Risks and Limitations
99
 
@@ -108,19 +155,193 @@ You can finetune this model on your own dataset.
108
 
109
  ## Training Details
110
 
111
  ### Framework Versions
112
  - Python: 3.11.12
113
  - Sentence Transformers: 4.1.0
114
  - Transformers: 4.51.3
115
  - PyTorch: 2.6.0+cu124
116
  - Accelerate: 1.6.0
117
- - Datasets: 2.14.4
118
  - Tokenizers: 0.21.1
119
 
120
  ## Citation
121
 
122
  ### BibTeX
123
 
124
  <!--
125
  ## Glossary
126
 
 
1
  ---
2
  tags:
3
  - sentence-transformers
4
+ - cross-encoder
5
+ - generated_from_trainer
6
+ - dataset_size:22258
7
+ - loss:FitMixinLoss
8
+ base_model: cross-encoder/ms-marco-MiniLM-L6-v2
9
+ pipeline_tag: text-ranking
10
  library_name: sentence-transformers
11
+ metrics:
12
+ - map
13
+ - mrr@10
14
+ - ndcg@10
15
+ model-index:
16
+ - name: CrossEncoder based on cross-encoder/ms-marco-MiniLM-L6-v2
17
+ results:
18
+ - task:
19
+ type: cross-encoder-reranking
20
+ name: Cross Encoder Reranking
21
+ dataset:
22
+ name: cross rerank dev mixed neg
23
+ type: cross-rerank-dev-mixed-neg
24
+ metrics:
25
+ - type: map
26
+ value: 0.4873053613053613
27
+ name: Map
28
+ - type: mrr@10
29
+ value: 0.48394871794871797
30
+ name: Mrr@10
31
+ - type: ndcg@10
32
+ value: 0.5970778430138177
33
+ name: Ndcg@10
34
  ---
35
 
36
+ # CrossEncoder based on cross-encoder/ms-marco-MiniLM-L6-v2
37
 
38
+ This is a [Cross Encoder](https://www.sbert.net/docs/cross_encoder/usage/usage.html) model finetuned from [cross-encoder/ms-marco-MiniLM-L6-v2](https://huggingface.co/cross-encoder/ms-marco-MiniLM-L6-v2) using the [sentence-transformers](https://www.SBERT.net) library. It computes scores for pairs of texts, which can be used for text reranking and semantic search.
39
 
40
  ## Model Details
41
 
42
  ### Model Description
43
+ - **Model Type:** Cross Encoder
44
+ - **Base model:** [cross-encoder/ms-marco-MiniLM-L6-v2](https://huggingface.co/cross-encoder/ms-marco-MiniLM-L6-v2) <!-- at revision ce0834f22110de6d9222af7a7a03628121708969 -->
45
  - **Maximum Sequence Length:** 512 tokens
46
+ - **Number of Output Labels:** 1 label
47
  <!-- - **Training Dataset:** Unknown -->
48
  <!-- - **Language:** Unknown -->
49
  <!-- - **License:** Unknown -->
 
51
  ### Model Sources
52
 
53
  - **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
54
+ - **Documentation:** [Cross Encoder Documentation](https://www.sbert.net/docs/cross_encoder/usage/usage.html)
55
  - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
56
+ - **Hugging Face:** [Cross Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=cross-encoder)
57
 
58
  ## Usage
59
 
 
67
 
68
  Then you can load this model and run inference.
69
  ```python
70
+ from sentence_transformers import CrossEncoder
71
 
72
  # Download from the 🤗 Hub
73
+ model = CrossEncoder("CharlesPing/finetuned-cross-encoder-l6-v2")
74
+ # Get scores for pairs of texts
75
+ pairs = [
76
+ ['‘Getting hung up on the exact nature of the records is interesting, and there’s lots of technical work that can be done there, but the main take-home response there is that the trends we’ve been seeing since the 1970s are continuing and have not paused in any way,’ he said.”', 'Rosenzweig also criticized the "waffling—encouraged by the NPOV policy—[which] means that it is hard to discern any overall interpretive stance in Wikipedia history".'],
77
+ ['After the 9/11 terrorist attacks grounded commercial air traffic, "there was a temperature drop while the airplanes weren\'t flying, for the week afterwards."', 'Play media At 9:42\xa0a.m., the Federal Aviation Administration (FAA) grounded all civilian aircraft within the continental U.S., and civilian aircraft already in flight were told to land immediately.'],
78
+ ['But the central message of the IPCC AR4, is confirmed by the peer reviewed literature.', 'Scientific consensus is normally achieved through communication at conferences, publication in the scientific literature, replication (reproducible results by others), and peer review.'],
79
+ ['"Many people think the science of climate change is settled.', 'During his administration, the bridge from Filadelfia and Liberia was constructed, as was the Old National Theater.'],
80
+ ['“Even if you could calculate some sort of meaningful global temperature statistic, the figure would be unimportant.', 'Quantitative information or data is based on quantities obtained using a quantifiable measurement process.'],
81
  ]
82
+ scores = model.predict(pairs)
83
+ print(scores.shape)
84
+ # (5,)
85
+
86
+ # Or rank different texts based on similarity to a single text
87
+ ranks = model.rank(
88
+ '‘Getting hung up on the exact nature of the records is interesting, and there’s lots of technical work that can be done there, but the main take-home response there is that the trends we’ve been seeing since the 1970s are continuing and have not paused in any way,’ he said.”',
89
+ [
90
+ 'Rosenzweig also criticized the "waffling—encouraged by the NPOV policy—[which] means that it is hard to discern any overall interpretive stance in Wikipedia history".',
91
+ 'Play media At 9:42\xa0a.m., the Federal Aviation Administration (FAA) grounded all civilian aircraft within the continental U.S., and civilian aircraft already in flight were told to land immediately.',
92
+ 'Scientific consensus is normally achieved through communication at conferences, publication in the scientific literature, replication (reproducible results by others), and peer review.',
93
+ 'During his administration, the bridge from Filadelfia and Liberia was constructed, as was the Old National Theater.',
94
+ 'Quantitative information or data is based on quantities obtained using a quantifiable measurement process.',
95
+ ]
96
+ )
97
+ # [{'corpus_id': ..., 'score': ...}, {'corpus_id': ..., 'score': ...}, ...]
98
  ```
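+
+ Because this commit also switches `config.json` from `BertModel` to `BertForSequenceClassification` with a single output label, the checkpoint should also be loadable with plain 🤗 Transformers. The snippet below is a hedged sketch of that route, not an official usage example: the pair is borrowed from the examples above, and the raw logit is used directly as a ranking score, which may differ from `CrossEncoder.predict` if an activation function is applied on top.
+
+ ```python
+ import torch
+ from transformers import AutoModelForSequenceClassification, AutoTokenizer
+
+ model_id = "CharlesPing/finetuned-cross-encoder-l6-v2"
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForSequenceClassification.from_pretrained(model_id)
+ model.eval()
+
+ query = "But the central message of the IPCC AR4, is confirmed by the peer reviewed literature."
+ doc = "Scientific consensus is normally achieved through communication at conferences, publication in the scientific literature, replication (reproducible results by others), and peer review."
+
+ # Tokenize the (query, document) pair and read off the single relevance logit.
+ features = tokenizer(query, doc, truncation=True, return_tensors="pt")
+ with torch.no_grad():
+     score = model(**features).logits.squeeze(-1)
+ print(score)  # higher = more relevant
+ ```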
99
 
100
  <!--
 
121
  *List how the model may foreseeably be misused and address what users ought not to do with the model.*
122
  -->
123
 
124
+ ## Evaluation
125
+
126
+ ### Metrics
127
+
128
+ #### Cross Encoder Reranking
129
+
130
+ * Dataset: `cross-rerank-dev-mixed-neg`
131
+ * Evaluated with [<code>CrossEncoderRerankingEvaluator</code>](https://sbert.net/docs/package_reference/cross_encoder/evaluation.html#sentence_transformers.cross_encoder.evaluation.CrossEncoderRerankingEvaluator) with these parameters:
132
+ ```json
133
+ {
134
+ "at_k": 10
135
+ }
136
+ ```
137
+
138
+ | Metric | Value |
139
+ |:------------|:-----------|
140
+ | map | 0.4873 |
141
+ | mrr@10 | 0.4839 |
142
+ | **ndcg@10** | **0.5971** |
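+
+ To recompute these numbers, `CrossEncoderRerankingEvaluator` can be run against the model as sketched below. This is an assumption-based example: the actual `cross-rerank-dev-mixed-neg` samples are not shipped with this card, so the query, positive, and negative entries shown are placeholders borrowed from the usage section.
+
+ ```python
+ from sentence_transformers import CrossEncoder
+ from sentence_transformers.cross_encoder.evaluation import CrossEncoderRerankingEvaluator
+
+ model = CrossEncoder("CharlesPing/finetuned-cross-encoder-l6-v2")
+
+ # Placeholder dev samples: each entry pairs a query with known positive and negative documents.
+ samples = [
+     {
+         "query": "But the central message of the IPCC AR4, is confirmed by the peer reviewed literature.",
+         "positive": ["Scientific consensus is normally achieved through communication at conferences, publication in the scientific literature, replication (reproducible results by others), and peer review."],
+         "negative": ["During his administration, the bridge from Filadelfia and Liberia was constructed, as was the Old National Theater."],
+     },
+ ]
+
+ evaluator = CrossEncoderRerankingEvaluator(samples, at_k=10, name="cross-rerank-dev-mixed-neg")
+ print(evaluator(model))  # reports map, mrr@10 and ndcg@10 under the evaluator name
+ ```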
143
+
144
  <!--
145
  ## Bias, Risks and Limitations
146
 
 
155
 
156
  ## Training Details
157
 
158
+ ### Training Dataset
159
+
160
+ #### Unnamed Dataset
161
+
162
+ * Size: 22,258 training samples
163
+ * Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>label</code>
164
+ * Approximate statistics based on the first 1000 samples:
165
+ | | sentence_0 | sentence_1 | label |
166
+ |:--------|:-------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------|:---------------------------------------------------------------|
167
+ | type | string | string | float |
168
+ | details | <ul><li>min: 26 characters</li><li>mean: 121.91 characters</li><li>max: 319 characters</li></ul> | <ul><li>min: 36 characters</li><li>mean: 140.85 characters</li><li>max: 573 characters</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.16</li><li>max: 1.0</li></ul> |
169
+ * Samples:
170
+ | sentence_0 | sentence_1 | label |
171
+ |:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------|
172
+ | <code>‘Getting hung up on the exact nature of the records is interesting, and there’s lots of technical work that can be done there, but the main take-home response there is that the trends we’ve been seeing since the 1970s are continuing and have not paused in any way,’ he said.”</code> | <code>Rosenzweig also criticized the "waffling—encouraged by the NPOV policy—[which] means that it is hard to discern any overall interpretive stance in Wikipedia history".</code> | <code>1.0</code> |
173
+ | <code>After the 9/11 terrorist attacks grounded commercial air traffic, "there was a temperature drop while the airplanes weren't flying, for the week afterwards."</code> | <code>Play media At 9:42 a.m., the Federal Aviation Administration (FAA) grounded all civilian aircraft within the continental U.S., and civilian aircraft already in flight were told to land immediately.</code> | <code>1.0</code> |
174
+ | <code>But the central message of the IPCC AR4, is confirmed by the peer reviewed literature.</code> | <code>Scientific consensus is normally achieved through communication at conferences, publication in the scientific literature, replication (reproducible results by others), and peer review.</code> | <code>1.0</code> |
175
+ * Loss: [<code>FitMixinLoss</code>](https://sbert.net/docs/package_reference/cross_encoder/losses.html#fitmixinloss)
176
+
177
+ ### Training Hyperparameters
178
+ #### Non-Default Hyperparameters
179
+
180
+ - `eval_strategy`: steps
181
+ - `per_device_train_batch_size`: 16
182
+ - `per_device_eval_batch_size`: 16
183
+
184
+ #### All Hyperparameters
185
+ <details><summary>Click to expand</summary>
186
+
187
+ - `overwrite_output_dir`: False
188
+ - `do_predict`: False
189
+ - `eval_strategy`: steps
190
+ - `prediction_loss_only`: True
191
+ - `per_device_train_batch_size`: 16
192
+ - `per_device_eval_batch_size`: 16
193
+ - `per_gpu_train_batch_size`: None
194
+ - `per_gpu_eval_batch_size`: None
195
+ - `gradient_accumulation_steps`: 1
196
+ - `eval_accumulation_steps`: None
197
+ - `torch_empty_cache_steps`: None
198
+ - `learning_rate`: 5e-05
199
+ - `weight_decay`: 0.0
200
+ - `adam_beta1`: 0.9
201
+ - `adam_beta2`: 0.999
202
+ - `adam_epsilon`: 1e-08
203
+ - `max_grad_norm`: 1
204
+ - `num_train_epochs`: 3
205
+ - `max_steps`: -1
206
+ - `lr_scheduler_type`: linear
207
+ - `lr_scheduler_kwargs`: {}
208
+ - `warmup_ratio`: 0.0
209
+ - `warmup_steps`: 0
210
+ - `log_level`: passive
211
+ - `log_level_replica`: warning
212
+ - `log_on_each_node`: True
213
+ - `logging_nan_inf_filter`: True
214
+ - `save_safetensors`: True
215
+ - `save_on_each_node`: False
216
+ - `save_only_model`: False
217
+ - `restore_callback_states_from_checkpoint`: False
218
+ - `no_cuda`: False
219
+ - `use_cpu`: False
220
+ - `use_mps_device`: False
221
+ - `seed`: 42
222
+ - `data_seed`: None
223
+ - `jit_mode_eval`: False
224
+ - `use_ipex`: False
225
+ - `bf16`: False
226
+ - `fp16`: False
227
+ - `fp16_opt_level`: O1
228
+ - `half_precision_backend`: auto
229
+ - `bf16_full_eval`: False
230
+ - `fp16_full_eval`: False
231
+ - `tf32`: None
232
+ - `local_rank`: 0
233
+ - `ddp_backend`: None
234
+ - `tpu_num_cores`: None
235
+ - `tpu_metrics_debug`: False
236
+ - `debug`: []
237
+ - `dataloader_drop_last`: False
238
+ - `dataloader_num_workers`: 0
239
+ - `dataloader_prefetch_factor`: None
240
+ - `past_index`: -1
241
+ - `disable_tqdm`: False
242
+ - `remove_unused_columns`: True
243
+ - `label_names`: None
244
+ - `load_best_model_at_end`: False
245
+ - `ignore_data_skip`: False
246
+ - `fsdp`: []
247
+ - `fsdp_min_num_params`: 0
248
+ - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
249
+ - `tp_size`: 0
250
+ - `fsdp_transformer_layer_cls_to_wrap`: None
251
+ - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
252
+ - `deepspeed`: None
253
+ - `label_smoothing_factor`: 0.0
254
+ - `optim`: adamw_torch
255
+ - `optim_args`: None
256
+ - `adafactor`: False
257
+ - `group_by_length`: False
258
+ - `length_column_name`: length
259
+ - `ddp_find_unused_parameters`: None
260
+ - `ddp_bucket_cap_mb`: None
261
+ - `ddp_broadcast_buffers`: False
262
+ - `dataloader_pin_memory`: True
263
+ - `dataloader_persistent_workers`: False
264
+ - `skip_memory_metrics`: True
265
+ - `use_legacy_prediction_loop`: False
266
+ - `push_to_hub`: False
267
+ - `resume_from_checkpoint`: None
268
+ - `hub_model_id`: None
269
+ - `hub_strategy`: every_save
270
+ - `hub_private_repo`: None
271
+ - `hub_always_push`: False
272
+ - `gradient_checkpointing`: False
273
+ - `gradient_checkpointing_kwargs`: None
274
+ - `include_inputs_for_metrics`: False
275
+ - `include_for_metrics`: []
276
+ - `eval_do_concat_batches`: True
277
+ - `fp16_backend`: auto
278
+ - `push_to_hub_model_id`: None
279
+ - `push_to_hub_organization`: None
280
+ - `mp_parameters`:
281
+ - `auto_find_batch_size`: False
282
+ - `full_determinism`: False
283
+ - `torchdynamo`: None
284
+ - `ray_scope`: last
285
+ - `ddp_timeout`: 1800
286
+ - `torch_compile`: False
287
+ - `torch_compile_backend`: None
288
+ - `torch_compile_mode`: None
289
+ - `include_tokens_per_second`: False
290
+ - `include_num_input_tokens_seen`: False
291
+ - `neftune_noise_alpha`: None
292
+ - `optim_target_modules`: None
293
+ - `batch_eval_metrics`: False
294
+ - `eval_on_start`: False
295
+ - `use_liger_kernel`: False
296
+ - `eval_use_gather_object`: False
297
+ - `average_tokens_across_devices`: False
298
+ - `prompts`: None
299
+ - `batch_sampler`: batch_sampler
300
+ - `multi_dataset_batch_sampler`: proportional
301
+
302
+ </details>
303
+
304
+ ### Training Logs
305
+ | Epoch | Step | Training Loss | cross-rerank-dev-mixed-neg_ndcg@10 |
306
+ |:------:|:----:|:-------------:|:----------------------------------:|
307
+ | 0.3592 | 500 | 0.4259 | 0.5154 |
308
+ | 0.7184 | 1000 | 0.3346 | 0.5497 |
309
+ | 1.0 | 1392 | - | 0.5640 |
310
+ | 1.0776 | 1500 | 0.3171 | 0.5660 |
311
+ | 1.4368 | 2000 | 0.2826 | 0.5669 |
312
+ | 1.7960 | 2500 | 0.281 | 0.5802 |
313
+ | 2.0 | 2784 | - | 0.5834 |
314
+ | 2.1552 | 3000 | 0.2553 | 0.5842 |
315
+ | 2.5144 | 3500 | 0.2326 | 0.5961 |
316
+ | 2.8736 | 4000 | 0.2408 | 0.5971 |
317
+
318
+
319
  ### Framework Versions
320
  - Python: 3.11.12
321
  - Sentence Transformers: 4.1.0
322
  - Transformers: 4.51.3
323
  - PyTorch: 2.6.0+cu124
324
  - Accelerate: 1.6.0
325
+ - Datasets: 3.5.1
326
  - Tokenizers: 0.21.1
327
 
328
  ## Citation
329
 
330
  ### BibTeX
331
 
332
+ #### Sentence Transformers
333
+ ```bibtex
334
+ @inproceedings{reimers-2019-sentence-bert,
335
+ title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
336
+ author = "Reimers, Nils and Gurevych, Iryna",
337
+ booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
338
+ month = "11",
339
+ year = "2019",
340
+ publisher = "Association for Computational Linguistics",
341
+ url = "https://arxiv.org/abs/1908.10084",
342
+ }
343
+ ```
344
+
345
  <!--
346
  ## Glossary
347
 
config.json CHANGED
@@ -1,6 +1,6 @@
1
  {
2
  "architectures": [
3
- "BertModel"
4
  ],
5
  "attention_probs_dropout_prob": 0.1,
6
  "classifier_dropout": null,
 
1
  {
2
  "architectures": [
3
+ "BertForSequenceClassification"
4
  ],
5
  "attention_probs_dropout_prob": 0.1,
6
  "classifier_dropout": null,
model.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:a610f0f065d2463387fb99e089e7317101263dc261e5cc4fda16fc8cd9503cc7
3
- size 90864192
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:bafddd7c2f5c1337838bd6699c67487afdd865a9e71bca20d6754a95520b0614
3
+ size 90866412