---
language:
  - en
license: mit
tags:
  - sentence-transformers
  - sparse-encoder
  - sparse
  - splade
  - generated_from_trainer
  - dataset_size:496123
  - loss:SpladeLoss
  - loss:SparseMultipleNegativesRankingLoss
  - loss:FlopsLoss
base_model: prajjwal1/bert-medium
widget:
  - text: >-
      What is the name, background and ethnicity of the actress who plays Raj’s
      sister Priya on “The Big Bang Theory”? —Charles Dix, Stewartsville, Mo.
      Aarti Mann, 36, a first-generation Indian American, was born in
      Connecticut and raised in Pennsylvania, and plays Priya Koothrappali on
      “The Big Bang Theory.”. Of landing the role as Raj’s sister, she says, “It
      is like winning the opportunity to go to the acting Olympics.
  - text: >-
      Resolved Question: Severe pain in right side of hip radiating down leg and
      into foot. It hurts to stand, walk, sit or lie down. I've had it for
      several weeks & have used heat, ice, muscle rub-ons & patches.
  - text: >-
      The Antarctic Treaty. The 12 nations listed in the preamble (below) signed
      the Antarctic Treaty on 1 December 1959 at Washington, D.C. The Treaty
      entered into force on 23 June 1961; the 12 signatories became the original
      12 consultative nations.nother 21 nations have acceded to the Antarctic
      Treaty: Austria, Belarus, Canada, Colombia, Cuba, Democratic Peoples
      Republic of Korea, Denmark, Estonia, Greece, Guatemala, Hungary, Malaysia,
      Monaco, Pakistan, Papua New Guinea, Portugal, Romania, Slovak Republic,
      Switzerland, Turkey, and Venezuela.
  - text: >-
      Orlando, Florida, USA — Sunrise, Sunset, and Daylength, May 2017. May 2017
      — Sun in Orlando.
  - text: >-
      Line baking dish ... to also cover roast). Place roast ... the roast.
      Place in preheated 300 degree oven for 2 1/2 to 3 hours. About 50 minutes
      per pound.rim all excess fat from roast. Place potatoes ... Crockery Pot
      on top of potatoes and onions. Cover and cook on low setting for 10 to 12
      hours (high 5 to 6).
pipeline_tag: feature-extraction
library_name: sentence-transformers
metrics:
  - dot_accuracy@1
  - dot_accuracy@3
  - dot_accuracy@5
  - dot_accuracy@10
  - dot_precision@1
  - dot_precision@3
  - dot_precision@5
  - dot_precision@10
  - dot_recall@1
  - dot_recall@3
  - dot_recall@5
  - dot_recall@10
  - dot_ndcg@10
  - dot_mrr@10
  - dot_map@100
  - query_active_dims
  - query_sparsity_ratio
  - corpus_active_dims
  - corpus_sparsity_ratio
model-index:
  - name: SPLADE-BERT-Medium
    results:
      - task:
          type: sparse-information-retrieval
          name: Sparse Information Retrieval
        dataset:
          name: Unknown
          type: unknown
        metrics:
          - type: dot_accuracy@1
            value: 0.4716
            name: Dot Accuracy@1
          - type: dot_accuracy@3
            value: 0.7802
            name: Dot Accuracy@3
          - type: dot_accuracy@5
            value: 0.8684
            name: Dot Accuracy@5
          - type: dot_accuracy@10
            value: 0.9396
            name: Dot Accuracy@10
          - type: dot_precision@1
            value: 0.4716
            name: Dot Precision@1
          - type: dot_precision@3
            value: 0.26713333333333333
            name: Dot Precision@3
          - type: dot_precision@5
            value: 0.18059999999999998
            name: Dot Precision@5
          - type: dot_precision@10
            value: 0.09851999999999998
            name: Dot Precision@10
          - type: dot_recall@1
            value: 0.4563333333333333
            name: Dot Recall@1
          - type: dot_recall@3
            value: 0.7666333333333334
            name: Dot Recall@3
          - type: dot_recall@5
            value: 0.8592166666666667
            name: Dot Recall@5
          - type: dot_recall@10
            value: 0.9338666666666667
            name: Dot Recall@10
          - type: dot_ndcg@10
            value: 0.7088774640922301
            name: Dot Ndcg@10
          - type: dot_mrr@10
            value: 0.6397524603174632
            name: Dot Mrr@10
          - type: dot_map@100
            value: 0.6359976077086615
            name: Dot Map@100
          - type: query_active_dims
            value: 23.28499984741211
            name: Query Active Dims
          - type: query_sparsity_ratio
            value: 0.9992371076650478
            name: Query Sparsity Ratio
          - type: corpus_active_dims
            value: 175.6306999586799
            name: Corpus Active Dims
          - type: corpus_sparsity_ratio
            value: 0.9942457669891004
            name: Corpus Sparsity Ratio
---

SPLADE-BERT-Medium

This is a SPLADE Sparse Encoder model finetuned from prajjwal1/bert-medium using the sentence-transformers library. It maps sentences & paragraphs to a 30522-dimensional sparse vector space and can be used for semantic search and sparse retrieval.

Model Details

Model Description

  • Model Type: SPLADE Sparse Encoder
  • Base model: prajjwal1/bert-medium
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 30522 dimensions
  • Similarity Function: Dot Product
  • Language: en
  • License: mit

Full Model Architecture

SparseEncoder(
  (0): MLMTransformer({'max_seq_length': 512, 'do_lower_case': False, 'architecture': 'BertForMaskedLM'})
  (1): SpladePooling({'pooling_strategy': 'max', 'activation_function': 'relu', 'word_embedding_dimension': 30522})
)
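Conceptually, the SpladePooling module above turns the MLM logits into a sparse vocabulary-sized vector. A minimal NumPy sketch of what it computes, assuming the standard SPLADE formulation (log(1 + ReLU(logits)), max-pooled over token positions); shapes and values here are illustrative, not actual model outputs:

```python
import numpy as np

def splade_pool(mlm_logits: np.ndarray) -> np.ndarray:
    """Max-pool SPLADE activations over the sequence dimension.

    mlm_logits: (seq_len, vocab_size) MLM logits for one input.
    Returns a (vocab_size,) vector that is mostly zero, because
    log1p(relu(x)) is exactly 0 wherever every logit is <= 0.
    """
    activations = np.log1p(np.maximum(mlm_logits, 0.0))  # log(1 + ReLU(x))
    return activations.max(axis=0)                       # max over token positions

# Toy example: 3 token positions over a 5-entry "vocabulary"
logits = np.array([
    [-1.0,  0.5,  2.0, -3.0, 0.0],
    [ 0.2, -0.1,  1.0, -2.0, 0.0],
    [-0.5,  3.0, -1.0, -0.2, 0.0],
])
vec = splade_pool(logits)
print((vec > 0).sum())  # 3 active dimensions: only dims with a positive logit survive
```

This zeroing behavior is what produces the low active-dimension counts reported in the evaluation metrics.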

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SparseEncoder

# Download from the 🤗 Hub
model = SparseEncoder("yosefw/SPLADE-BERT-Medium-BS384")
# Run inference
queries = [
    "how long to bake arm roast",
]
documents = [
    'Line baking dish ... to also cover roast). Place roast ... the roast. Place in preheated 300 degree oven for 2 1/2 to 3 hours. About 50 minutes per pound.rim all excess fat from roast. Place potatoes ... Crockery Pot on top of potatoes and onions. Cover and cook on low setting for 10 to 12 hours (high 5 to 6).',
    'Considerations. The total time it takes to cook an arm roast depends on its size. A 3- to 4-lb. chuck roast takes 5 to 6 hours on high and 10 to 12 hours on low.Chuck roasts usually contain enough marbled fat to cook without water, but most Crock-Pot roast recipes call for a little liquid.Most importantly, resist the temptation to lift the lid while your roast is cooking. 3- to 4-lb. chuck roast takes 5 to 6 hours on high and 10 to 12 hours on low. Chuck roasts usually contain enough marbled fat to cook without water, but most Crock-Pot roast recipes call for a little liquid. Most importantly, resist the temptation to lift the lid while your roast is cooking.',
    'Set your Crock Pot on high to reach a simmer point of 209 degrees F in 3 to 4 hours, or low to reach the same cooking temperature in 7 to 8 hours. The total time it takes to cook an arm roast depends on its size. A 3- to 4-lb. chuck roast takes 5 to 6 hours on high and 10 to 12 hours on low.Chuck roasts usually contain enough marbled fat to cook without water, but most Crock-Pot roast recipes call for a little liquid.Most importantly, resist the temptation to lift the lid while your roast is cooking. 3- to 4-lb. chuck roast takes 5 to 6 hours on high and 10 to 12 hours on low. Chuck roasts usually contain enough marbled fat to cook without water, but most Crock-Pot roast recipes call for a little liquid. Most importantly, resist the temptation to lift the lid while your roast is cooking.',
]
query_embeddings = model.encode_query(queries)
document_embeddings = model.encode_document(documents)
print(query_embeddings.shape, document_embeddings.shape)
# [1, 30522] [3, 30522]

# Get the similarity scores for the embeddings
similarities = model.similarity(query_embeddings, document_embeddings)
print(similarities)
# tensor([[16.1861, 15.3382, 15.6794]])
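Because only a few dozen dimensions are non-zero (roughly 23 for queries and 176 for documents, per the evaluation metrics), the dot-product similarity reduces to a sum over the small intersection of active dimensions, which is what makes these embeddings compatible with inverted indexes. A minimal sketch with token-id-to-weight dicts; the ids and weights are made up for illustration:

```python
# Sparse vectors as {vocab_id: weight} maps; values are illustrative only.
query = {1996: 0.8, 17428: 1.9, 8521: 1.4}
doc   = {17428: 1.2, 8521: 0.9, 4440: 0.7, 2051: 0.3}

def sparse_dot(q: dict, d: dict) -> float:
    """Dot product computed only over the intersection of active dims."""
    return sum(w * d[t] for t, w in q.items() if t in d)

print(sparse_dot(query, doc))  # 1.9*1.2 + 1.4*0.9 = 3.54
```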

Evaluation

Metrics

Sparse Information Retrieval

| Metric                | Value    |
|:----------------------|---------:|
| dot_accuracy@1        | 0.4716   |
| dot_accuracy@3        | 0.7802   |
| dot_accuracy@5        | 0.8684   |
| dot_accuracy@10       | 0.9396   |
| dot_precision@1       | 0.4716   |
| dot_precision@3       | 0.2671   |
| dot_precision@5       | 0.1806   |
| dot_precision@10      | 0.0985   |
| dot_recall@1          | 0.4563   |
| dot_recall@3          | 0.7666   |
| dot_recall@5          | 0.8592   |
| dot_recall@10         | 0.9339   |
| dot_ndcg@10           | 0.7089   |
| dot_mrr@10            | 0.6398   |
| dot_map@100           | 0.636    |
| query_active_dims     | 23.285   |
| query_sparsity_ratio  | 0.9992   |
| corpus_active_dims    | 175.6307 |
| corpus_sparsity_ratio | 0.9942   |
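The sparsity ratios follow directly from the active-dimension counts over the 30522-entry vocabulary: ratio = 1 − active_dims / 30522. A quick check:

```python
VOCAB_SIZE = 30522  # output dimensionality of the model

def sparsity_ratio(active_dims: float, vocab_size: int = VOCAB_SIZE) -> float:
    """Fraction of dimensions that are zero on average."""
    return 1.0 - active_dims / vocab_size

print(round(sparsity_ratio(23.285), 4))    # 0.9992  (query)
print(round(sparsity_ratio(175.6307), 4))  # 0.9942  (corpus)
```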

Training Details

Training Dataset

Unnamed Dataset

  • Size: 496,123 training samples
  • Columns: query, positive, negative_1, and negative_2
  • Approximate statistics based on the first 1000 samples:

    |         | query | positive | negative_1 | negative_2 |
    |:--------|:------|:---------|:-----------|:-----------|
    | type    | string | string | string | string |
    | details | min: 4 tokens<br>mean: 8.87 tokens<br>max: 43 tokens | min: 24 tokens<br>mean: 81.23 tokens<br>max: 259 tokens | min: 20 tokens<br>mean: 79.21 tokens<br>max: 197 tokens | min: 20 tokens<br>mean: 77.89 tokens<br>max: 207 tokens |
  • Samples:

    | query | positive | negative_1 | negative_2 |
    |:------|:---------|:-----------|:-----------|
    | heart specialists in ridgeland ms | Dr. George Reynolds Jr, MD is a cardiology specialist in Ridgeland, MS and has been practicing for 35 years. He graduated from Vanderbilt University School Of Medicine in 1977 and specializes in cardiology and internal medicine. | Dr. James Kramer is a Internist in Ridgeland, MS. Find Dr. Kramer's phone number, address and more. | Dr. James Kramer is an internist in Ridgeland, Mississippi. He received his medical degree from Loma Linda University School of Medicine and has been in practice for more than 20 years. Dr. James Kramer's Details |
    | does baytril otic require a prescription | Baytril Otic Ear Drops-Enrofloxacin/Silver Sulfadiazine-Prices & Information. A prescription is required for this item. A prescription is required for this item. Brand medication is not available at this time. RX required for this item. Click here for our full Prescription Policy and Form. | Baytril Otic (enrofloxacin/silver sulfadiazine) Emulsion from Bayer is the first fluoroquinolone approved by the Food and Drug Administration for the topical treatment of canine otitis externa. | Product Details. Baytril Otic is a highly effective treatment prescribed by many veterinarians when your pet has an ear infection caused by susceptible bacteria or fungus. Baytril Otic is: a liquid emulsion that is used topically directly in the ear or on the skin in order to treat susceptible bacterial and yeast infections. |
    | what is on a gyro | Report Abuse. Gyros or gyro (giros) (pronounced /ˈjɪəroʊ/ or /ˈdʒaɪroʊ/, Greek: γύρος turn) is a Greek dish consisting of meat (typically lamb and/or beef), tomato, onion, and tzatziki sauce, and is served with pita bread. Chicken and pork meat can be used too. | A gyroscope (from Ancient Greek γῦρος gûros, circle and σκοπέω skopéō, to look) is a spinning wheel or disc in which the axis of rotation is free to assume any orientation by itself. When rotating, the orientation of this axis is unaffected by tilting or rotation of the mounting, according to the conservation of angular momentum. | Diagram of a gyro wheel. Reaction arrows about the output axis (blue) correspond to forces applied about the input axis (green), and vice versa. A gyroscope is a wheel mounted in two or three gimbals, which are a pivoted supports that allow the rotation of the wheel about a single axis. |
  • Loss: SpladeLoss with these parameters:
    {
        "loss": "SparseMultipleNegativesRankingLoss(scale=1.0, similarity_fct='dot_score', gather_across_devices=False)",
        "document_regularizer_weight": 0.003,
        "query_regularizer_weight": 0.005
    }
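The two regularizer weights above scale a FLOPS penalty applied to query and document embeddings respectively. A minimal sketch of the FLOPS term, assuming the formulation from Paria et al. (2020) that FlopsLoss is based on (toy values, not actual training batches):

```python
import numpy as np

def flops_loss(embeddings: np.ndarray) -> float:
    """FLOPS regularizer: sum over vocab dims of the squared mean
    activation across the batch. Pushing each dimension's *average*
    weight toward zero encourages entire dimensions to switch off,
    which is what yields sparse vectors.

    embeddings: (batch_size, vocab_size) non-negative SPLADE vectors.
    """
    mean_per_dim = np.abs(embeddings).mean(axis=0)   # average weight per vocab dim
    return float((mean_per_dim ** 2).sum())

# Toy batch: 2 vectors over a 4-dim "vocabulary"
batch = np.array([
    [0.0, 2.0, 0.0, 1.0],
    [0.0, 0.0, 0.0, 1.0],
])
reg = flops_loss(batch)
print(reg)  # means are [0, 1, 0, 1], so 0^2 + 1^2 + 0^2 + 1^2 = 2.0
```

In SpladeLoss this term is added twice on top of the ranking loss: once for queries (weight 0.005) and once for documents (weight 0.003).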
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 48
  • per_device_eval_batch_size: 48
  • gradient_accumulation_steps: 8
  • learning_rate: 8e-05
  • num_train_epochs: 8
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.025
  • fp16: True
  • load_best_model_at_end: True
  • push_to_hub: True
  • batch_sampler: no_duplicates
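Assuming training on a single device, the effective batch size is 48 × 8 = 384, which matches the BS384 suffix of the Hub model ID and explains the step counts in the training logs:

```python
import math

per_device_batch = 48   # per_device_train_batch_size
grad_accum = 8          # gradient_accumulation_steps
dataset_size = 496_123  # training samples

effective_batch = per_device_batch * grad_accum
steps_per_epoch = math.ceil(dataset_size / effective_batch)

print(effective_batch)   # 384
print(steps_per_epoch)   # 1292, matching the step column in the training logs
```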

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 48
  • per_device_eval_batch_size: 48
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 8
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 8e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 8
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.025
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: True
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • hub_revision: None
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • liger_kernel_config: None
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional
  • router_mapping: {}
  • learning_rate_mapping: {}

Training Logs

| Epoch | Step | Training Loss | dot_ndcg@10 |
|:------|:-----|:--------------|:------------|
| 1.0   | 1292 | 42.0325       | 0.7155      |
| 2.0   | 2584 | 1.1261        | 0.7216      |
| 3.0   | 3876 | 1.049         | 0.7214      |
| 4.0   | 5168 | 0.9631        | 0.7188      |
| 5.0   | 6460 | 0.8725        | 0.7120      |
| -1    | -1   | -             | 0.7089      |

Framework Versions

  • Python: 3.12.11
  • Sentence Transformers: 5.1.0
  • Transformers: 4.55.4
  • PyTorch: 2.8.0+cu126
  • Accelerate: 1.10.1
  • Datasets: 4.0.0
  • Tokenizers: 0.21.4

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

SpladeLoss

@misc{formal2022distillationhardnegativesampling,
      title={From Distillation to Hard Negative Sampling: Making Sparse Neural IR Models More Effective},
      author={Thibault Formal and Carlos Lassance and Benjamin Piwowarski and Stéphane Clinchant},
      year={2022},
      eprint={2205.04733},
      archivePrefix={arXiv},
      primaryClass={cs.IR},
      url={https://arxiv.org/abs/2205.04733},
}

SparseMultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}

FlopsLoss

@article{paria2020minimizing,
    title={Minimizing flops to learn efficient sparse representations},
    author={Paria, Biswajit and Yeh, Chih-Kuan and Yen, Ian EH and Xu, Ning and Ravikumar, Pradeep and P{\'o}czos, Barnab{\'a}s},
    journal={arXiv preprint arXiv:2004.05665},
    year={2020}
}