ms-marco-MiniLM-L-6-v2 Finetuned on PV211 Homework

This is a Cross Encoder model finetuned from cross-encoder/ms-marco-MiniLM-L6-v2 using the sentence-transformers library. It computes scores for pairs of texts, which can be used for text reranking and semantic search.

Model Details

Model Description

  • Model Type: Cross Encoder
  • Base model: cross-encoder/ms-marco-MiniLM-L6-v2
  • Maximum Sequence Length: 512 tokens
  • Number of Output Labels: 1 label
  • Language: en
  • License: apache-2.0

Model Sources

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import CrossEncoder

# Download from the 🤗 Hub
model = CrossEncoder("maennyn/pv211_beir_cqadupstack_crossencoder2")
# Get scores for pairs of texts
pairs = [
    ['Increase the X length of a tikzpicture', "In recent years I've developed a habit of formatting SQL `SELECT` queries like so:               SELECT         fieldNames     FROM         sources         JOIN tableSource ON col1 = col2         JOIN (             SELECT                 fieldNames             FROM                 otherSources          ) AS subQuery ON subQuery.foo = col2     WHERE         someField = somePredicate      So you see my pattern: each keyword is on its own line and that keyword's fields are indented by 1 tab-stop and the pattern is used recursively for sub- queries. This works well for all of my `SELECT` queries, as it maximizes readability though at the cost of vertical space; but it doesn't work for things like `INSERT` and `UPDATE` which have radically different syntax.               INSERT INTO tableName            (  col1,   col2,   col3,   col4,   col5,   col6,   col7,   col8  )     VALUES ( 'col1', 'col2', 'col3', 'col4', 'col5', 'col6', 'col7', 'col8' ),     VALUES ( 'col1', 'col2', 'col3', 'col4', 'col5', 'col6', 'col7', 'col8' )          UPDATE tableName     SET         col1 = 'col1',         col2 = 'col2',         col3 = 'col3',         // etc     WHERE         someField = somePredicate      As you can see, they aren't as pretty, and when you're dealing with tables with a lot of columns they quickly become unweildly. Is there a better way to format `INSERT` and `UPDATE`? And what about `CREATE` statements and other operations?"],
    ['Fillable form: checkbox linked to hide/unhide sections; pushbutton to add/delete rows', "I'd like to create a LaTeX document that when rendered into PDF, has forms that can be filled out using Adobe Reader or other such programs. Then I'd like to be able to extract the data. I deliberately would like to avoid using Acrobat for all the usual reasons (non-free, need different versions for different platforms etc). Can this be done ?"],
    ['Is there any way to get something like pmatrix with customizable grid lines between cells?', "> **Possible Duplicate:**   >  Highlight elements in the matrix i have a matrix:               \\begin{equation}      \\begin{bmatrix}         1 & 5 & 4 & 2 & 1 \\\\         1 & 5 & 4 & 2 & 1 \\\\         1 & 5 & 4 & 2 & 1 \\\\     \\end{bmatrix}     \\label{e:crop1}     \\end{equation}      and i would like to draw a box around a few of the values to highlight a selection & label it, how would i go about this? I've looked at nodes but havent got a clue. thanks"],
    ["Difference between 'all' and 'all the'", 'I am not confident about my judgement as to whether or not "the" is required if a relative clause is used in a sentence.   For example, > The data can be collected on all the computers on which the software is > installed. I think it must be "all the computers " and not be "all computers" because "computers" is specified by "on which the software is installed". Please help me confirm that I am right.'],
    ['Understanding the exclamation mark (!) in bash', "I'm following through a tutorial and it mentions to run this command:               sudo chmod 700 !$      I'm not familiar with `!$`. What does it mean?"],
]
scores = model.predict(pairs)
print(scores.shape)
# (5,)

# Or rank different texts based on similarity to a single text
ranks = model.rank(
    'Increase the X length of a tikzpicture',
    [
        "In recent years I've developed a habit of formatting SQL `SELECT` queries like so:               SELECT         fieldNames     FROM         sources         JOIN tableSource ON col1 = col2         JOIN (             SELECT                 fieldNames             FROM                 otherSources          ) AS subQuery ON subQuery.foo = col2     WHERE         someField = somePredicate      So you see my pattern: each keyword is on its own line and that keyword's fields are indented by 1 tab-stop and the pattern is used recursively for sub- queries. This works well for all of my `SELECT` queries, as it maximizes readability though at the cost of vertical space; but it doesn't work for things like `INSERT` and `UPDATE` which have radically different syntax.               INSERT INTO tableName            (  col1,   col2,   col3,   col4,   col5,   col6,   col7,   col8  )     VALUES ( 'col1', 'col2', 'col3', 'col4', 'col5', 'col6', 'col7', 'col8' ),     VALUES ( 'col1', 'col2', 'col3', 'col4', 'col5', 'col6', 'col7', 'col8' )          UPDATE tableName     SET         col1 = 'col1',         col2 = 'col2',         col3 = 'col3',         // etc     WHERE         someField = somePredicate      As you can see, they aren't as pretty, and when you're dealing with tables with a lot of columns they quickly become unweildly. Is there a better way to format `INSERT` and `UPDATE`? And what about `CREATE` statements and other operations?",
        "I'd like to create a LaTeX document that when rendered into PDF, has forms that can be filled out using Adobe Reader or other such programs. Then I'd like to be able to extract the data. I deliberately would like to avoid using Acrobat for all the usual reasons (non-free, need different versions for different platforms etc). Can this be done ?",
        "> **Possible Duplicate:**   >  Highlight elements in the matrix i have a matrix:               \\begin{equation}      \\begin{bmatrix}         1 & 5 & 4 & 2 & 1 \\\\         1 & 5 & 4 & 2 & 1 \\\\         1 & 5 & 4 & 2 & 1 \\\\     \\end{bmatrix}     \\label{e:crop1}     \\end{equation}      and i would like to draw a box around a few of the values to highlight a selection & label it, how would i go about this? I've looked at nodes but havent got a clue. thanks",
        'I am not confident about my judgement as to whether or not "the" is required if a relative clause is used in a sentence.   For example, > The data can be collected on all the computers on which the software is > installed. I think it must be "all the computers " and not be "all computers" because "computers" is specified by "on which the software is installed". Please help me confirm that I am right.',
        "I'm following through a tutorial and it mentions to run this command:               sudo chmod 700 !$      I'm not familiar with `!$`. What does it mean?",
    ]
)
# [{'corpus_id': ..., 'score': ...}, {'corpus_id': ..., 'score': ...}, ...]
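
Because model.predict returns one relevance score per (query, document) pair, the candidates can also be reranked manually. A minimal sketch using NumPy, reusing the pairs and scores variables from the snippet above:

import numpy as np

# Sort the candidate documents from `pairs` by predicted relevance, highest score first.
order = np.argsort(scores)[::-1]
for idx in order:
    print(f"{scores[idx]:.4f}\t{pairs[idx][1][:60]}...")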

Evaluation

Metrics

Cross Encoder Correlation

Metric Value
pearson 0.8858
spearman 0.8182
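
These values correspond to the sts_dev_spearman column in the training logs below. A minimal sketch of computing such correlations with CrossEncoderCorrelationEvaluator follows; the dev pairs and labels here are hypothetical stand-ins for the actual dev split, and the evaluator usage is an assumption based on the Sentence Transformers 4.x cross-encoder API, not the original evaluation script:

from sentence_transformers import CrossEncoder
from sentence_transformers.cross_encoder.evaluation import CrossEncoderCorrelationEvaluator

model = CrossEncoder("maennyn/pv211_beir_cqadupstack_crossencoder2")

# Hypothetical dev pairs with binary relevance labels.
dev_pairs = [
    ["Increase the X length of a tikzpicture", "You can stretch a tikzpicture horizontally with xscale."],
    ["Increase the X length of a tikzpicture", "How should I format SQL INSERT statements?"],
    ["Understanding the exclamation mark (!) in bash", "`!$` expands to the last argument of the previous command."],
]
dev_labels = [1.0, 0.0, 1.0]

evaluator = CrossEncoderCorrelationEvaluator(
    sentence_pairs=dev_pairs,
    scores=dev_labels,
    name="sts_dev",
)
print(evaluator(model))  # Pearson/Spearman correlations between predictions and labels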

Cross Encoder Reranking

  • Datasets: NanoMSMARCO_R100, NanoNFCorpus_R100 and NanoNQ_R100
  • Evaluated with CrossEncoderRerankingEvaluator with these parameters:
    {
        "at_k": 10,
        "always_rerank_positives": true
    }
    
Metric NanoMSMARCO_R100 NanoNFCorpus_R100 NanoNQ_R100
map 0.6048 (+0.1152) 0.3633 (+0.1023) 0.6871 (+0.2674)
mrr@10 0.5974 (+0.1199) 0.5961 (+0.0962) 0.7117 (+0.2850)
ndcg@10 0.6644 (+0.1240) 0.4082 (+0.0832) 0.7413 (+0.2407)

Cross Encoder Nano BEIR

  • Dataset: NanoBEIR_R100_mean
  • Evaluated with CrossEncoderNanoBEIREvaluator with these parameters:
    {
        "dataset_names": [
            "msmarco",
            "nfcorpus",
            "nq"
        ],
        "rerank_k": 100,
        "at_k": 10,
        "always_rerank_positives": true
    }
    
Metric Value
map 0.5517 (+0.1616)
mrr@10 0.6350 (+0.1670)
ndcg@10 0.6046 (+0.1493)
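
A minimal sketch of re-running this evaluation with CrossEncoderNanoBEIREvaluator and the parameters listed above (this assumes the Sentence Transformers version from Framework Versions and downloads the NanoBEIR datasets on first use):

from sentence_transformers import CrossEncoder
from sentence_transformers.cross_encoder.evaluation import CrossEncoderNanoBEIREvaluator

model = CrossEncoder("maennyn/pv211_beir_cqadupstack_crossencoder2")

evaluator = CrossEncoderNanoBEIREvaluator(
    dataset_names=["msmarco", "nfcorpus", "nq"],
    rerank_k=100,
    at_k=10,
    always_rerank_positives=True,
)
results = evaluator(model)
print(results)  # includes the mean ndcg@10 reported in the table above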

Training Details

Training Dataset

Unnamed Dataset

  • Size: 36,728 training samples
  • Columns: query, document, and label
  • Approximate statistics based on the first 1000 samples:
    query (string): min: 15 characters, mean: 49.89 characters, max: 128 characters
    document (string): min: 36 characters, mean: 718.8 characters, max: 17541 characters
    label (int): 0: ~48.90%, 1: ~51.10%
  • Samples:
    query: Increase the X length of a tikzpicture
    document: In recent years I've developed a habit of formatting SQL SELECT queries like so: SELECT fieldNames FROM sources JOIN tableSource ON col1 = col2 JOIN ( SELECT fieldNames FROM otherSources ) AS subQuery ON subQuery.foo = col2 WHERE someField = somePredicate So you see my pattern: each keyword is on its own line and that keyword's fields are indented by 1 tab-stop and the pattern is used recursively for sub- queries. This works well for all of my SELECT queries, as it maximizes readability though at the cost of vertical space; but it doesn't work for things like INSERT and UPDATE which have radically different syntax. INSERT INTO tableName ( col1, col2, col3, col4, col5, col6, col7, col8 ) VALUES ( 'col1', 'col2', 'col3', 'col4', 'col5', 'col6', 'col7', 'col8' ), VALUES ( 'col1', 'col2', 'col3', 'col4',...
    label: 0

    query: Fillable form: checkbox linked to hide/unhide sections; pushbutton to add/delete rows
    document: I'd like to create a LaTeX document that when rendered into PDF, has forms that can be filled out using Adobe Reader or other such programs. Then I'd like to be able to extract the data. I deliberately would like to avoid using Acrobat for all the usual reasons (non-free, need different versions for different platforms etc). Can this be done ?
    label: 1

    query: Is there any way to get something like pmatrix with customizable grid lines between cells?
    document: > Possible Duplicate: > Highlight elements in the matrix i have a matrix: \begin{equation} \begin{bmatrix} 1 & 5 & 4 & 2 & 1 \ 1 & 5 & 4 & 2 & 1 \ 1 & 5 & 4 & 2 & 1 \ \end{bmatrix} \label{e:crop1} \end{equation} and i would like to draw a box around a few of the values to highlight a selection & label it, how would i go about this? I've looked at nodes but havent got a clue. thanks
    label: 1
  • Loss: BinaryCrossEntropyLoss with these parameters:
    {
        "activation_fn": "torch.nn.modules.linear.Identity",
        "pos_weight": null
    }
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • learning_rate: 2e-05
  • warmup_ratio: 0.1
  • save_only_model: True
  • fp16: True
  • load_best_model_at_end: True
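
Taken together with the dataset and loss above, a minimal training sketch using these non-default hyperparameters might look like the following; the dataset rows, output directory, and save_strategy are hypothetical placeholders, not the original training script:

from datasets import Dataset
from sentence_transformers.cross_encoder import (
    CrossEncoder,
    CrossEncoderTrainer,
    CrossEncoderTrainingArguments,
)
from sentence_transformers.cross_encoder.losses import BinaryCrossEntropyLoss

# Hypothetical stand-in for the 36,728-sample (query, document, label) dataset.
train_dataset = Dataset.from_dict({
    "query": ["Increase the X length of a tikzpicture"] * 2,
    "document": [
        "You can stretch a tikzpicture horizontally with xscale.",
        "How should I format SQL INSERT statements?",
    ],
    "label": [1, 0],
})
eval_dataset = train_dataset  # placeholder; a held-out split was used in practice

model = CrossEncoder("cross-encoder/ms-marco-MiniLM-L6-v2")
loss = BinaryCrossEntropyLoss(model)  # Identity activation, no pos_weight, as listed above

args = CrossEncoderTrainingArguments(
    output_dir="pv211_crossencoder",  # hypothetical output directory
    num_train_epochs=3,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    learning_rate=2e-5,
    warmup_ratio=0.1,
    fp16=True,                        # requires a GPU, as in the card
    eval_strategy="epoch",
    save_strategy="epoch",            # assumed, so load_best_model_at_end can pick a checkpoint
    save_only_model=True,
    load_best_model_at_end=True,
)

trainer = CrossEncoderTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=loss,
)
trainer.train()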

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 3
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: True
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • tp_size: 0
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch Step Training Loss sts_dev_spearman NanoMSMARCO_R100_ndcg@10 NanoNFCorpus_R100_ndcg@10 NanoNQ_R100_ndcg@10 NanoBEIR_R100_mean_ndcg@10
-1 -1 - 0.7222 0.6686 (+0.1282) 0.3930 (+0.0680) 0.7599 (+0.2592) 0.6072 (+0.1518)
0.4355 1000 0.4163 - - - - -
0.8711 2000 0.1632 - - - - -
1.0 2296 - 0.8182 0.6644 (+0.1240) 0.4082 (+0.0832) 0.7413 (+0.2407) 0.6046 (+0.1493)
1.3066 3000 0.1227 - - - - -
1.7422 4000 0.1157 - - - - -
2.0 4592 - 0.8201 0.6266 (+0.0862) 0.4096 (+0.0846) 0.7032 (+0.2026) 0.5798 (+0.1244)
2.1777 5000 0.0964 - - - - -
2.6132 6000 0.081 - - - - -
3.0 6888 - 0.8203 0.6241 (+0.0837) 0.4068 (+0.0817) 0.6931 (+0.1924) 0.5747 (+0.1193)
-1 -1 - 0.8182 0.6644 (+0.1240) 0.4082 (+0.0832) 0.7413 (+0.2407) 0.6046 (+0.1493)
  • The epoch 1.0 row (step 2296) is the saved checkpoint; its metrics are repeated in the final row.

Framework Versions

  • Python: 3.11.11
  • Sentence Transformers: 4.1.0
  • Transformers: 4.51.3
  • PyTorch: 2.8.0.dev20250319+cu128
  • Accelerate: 1.6.0
  • Datasets: 3.6.0
  • Tokenizers: 0.21.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}