splade-distilbert-base-uncased trained on GooAQ

This is a SPLADE Sparse Encoder model finetuned from distilbert/distilbert-base-uncased on the gooaq dataset using the sentence-transformers library. It maps sentences & paragraphs to a 30522-dimensional sparse vector space and can be used for semantic search and sparse retrieval.

Model Details

Model Description

  • Model Type: SPLADE Sparse Encoder
  • Base model: distilbert/distilbert-base-uncased
  • Maximum Sequence Length: 256 tokens
  • Output Dimensionality: 30522 dimensions
  • Similarity Function: Dot Product
  • Training Dataset: gooaq
  • Language: en
  • License: apache-2.0

Model Sources

  • Documentation: https://www.sbert.net
  • Repository: https://github.com/UKPLab/sentence-transformers
  • Hugging Face: https://huggingface.co/tomaarsen/splade-distilbert-base-uncased-gooaq-peft-r128

Full Model Architecture

SparseEncoder(
  (0): MLMTransformer({'max_seq_length': 256, 'do_lower_case': False, 'architecture': 'DistilBertForMaskedLM'})
  (1): SpladePooling({'pooling_strategy': 'max', 'activation_function': 'relu', 'word_embedding_dimension': 30522})
)
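
The two modules above can also be assembled by hand. A minimal sketch using the sentence-transformers sparse encoder module API (the variable names are illustrative, not from the original training script):

from sentence_transformers import SparseEncoder
from sentence_transformers.sparse_encoder.models import MLMTransformer, SpladePooling

# Masked-language-model head: one logit per vocabulary token (30522 for DistilBERT)
mlm = MLMTransformer("distilbert/distilbert-base-uncased", max_seq_length=256)
# SPLADE pooling: log(1 + ReLU(logits)), max-pooled over the sequence dimension
pooling = SpladePooling(pooling_strategy="max")
model = SparseEncoder(modules=[mlm, pooling])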

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SparseEncoder

# Download from the 🤗 Hub
model = SparseEncoder("tomaarsen/splade-distilbert-base-uncased-gooaq-peft-r128")
# Run inference
queries = [
    "how many days for doxycycline to work on sinus infection?",
]
documents = [
    'Treatment of suspected bacterial infection is with antibiotics, such as amoxicillin/clavulanate or doxycycline, given for 5 to 7 days for acute sinusitis and for up to 6 weeks for chronic sinusitis.',
    'Most engagements typically have a cocktail dress code, calling for dresses at, or slightly above, knee-length and high heels. If your party states a different dress code, however, such as semi-formal or dressy-casual, you may need to dress up or down accordingly.',
    'The average service life of a gas furnace is about 15 years, but the actual life span of an individual unit can vary greatly. There are a number of contributing factors that determine the age a furnace reaches: The quality of the equipment.',
]
query_embeddings = model.encode_query(queries)
document_embeddings = model.encode_document(documents)
print(query_embeddings.shape, document_embeddings.shape)
# [1, 30522] [3, 30522]

# Get the similarity scores for the embeddings
similarities = model.similarity(query_embeddings, document_embeddings)
print(similarities)
# tensor([[85.3246, 22.8328, 29.6908]])
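
Because each dimension corresponds to a vocabulary token, you can inspect which tokens drive a score. A small sketch continuing from the snippet above (the top-k of 10 is arbitrary):

import torch

# Embeddings may be returned as sparse tensors; densify the first query embedding
query = query_embeddings[0].to_dense() if query_embeddings.is_sparse else query_embeddings[0]
values, indices = torch.topk(query, k=10)
# Map vocabulary indices back to tokens via the model's tokenizer
tokens = model.tokenizer.convert_ids_to_tokens(indices.tolist())
for token, value in zip(tokens, values.tolist()):
    print(f"{token:>15}  {value:.2f}")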

Evaluation

Metrics

Sparse Information Retrieval

  • Datasets: NanoMSMARCO, NanoNFCorpus, NanoNQ, NanoClimateFEVER, NanoDBPedia, NanoFEVER, NanoFiQA2018, NanoHotpotQA, NanoQuoraRetrieval, NanoSCIDOCS, NanoArguAna, NanoSciFact and NanoTouche2020
  • Evaluated with SparseInformationRetrievalEvaluator
| Metric | NanoMSMARCO | NanoNFCorpus | NanoNQ | NanoClimateFEVER | NanoDBPedia | NanoFEVER | NanoFiQA2018 | NanoHotpotQA | NanoQuoraRetrieval | NanoSCIDOCS | NanoArguAna | NanoSciFact | NanoTouche2020 |
|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|
| dot_accuracy@1 | 0.32 | 0.3 | 0.24 | 0.2 | 0.62 | 0.44 | 0.22 | 0.68 | 0.36 | 0.28 | 0.02 | 0.36 | 0.5306 |
| dot_accuracy@3 | 0.5 | 0.46 | 0.46 | 0.38 | 0.78 | 0.66 | 0.42 | 0.78 | 0.52 | 0.52 | 0.14 | 0.56 | 0.7959 |
| dot_accuracy@5 | 0.64 | 0.48 | 0.54 | 0.42 | 0.86 | 0.78 | 0.46 | 0.8 | 0.58 | 0.62 | 0.22 | 0.6 | 0.9184 |
| dot_accuracy@10 | 0.72 | 0.6 | 0.72 | 0.52 | 0.92 | 0.84 | 0.54 | 0.88 | 0.78 | 0.78 | 0.38 | 0.66 | 0.9592 |
| dot_precision@1 | 0.32 | 0.3 | 0.24 | 0.2 | 0.62 | 0.44 | 0.22 | 0.68 | 0.36 | 0.28 | 0.02 | 0.36 | 0.5306 |
| dot_precision@3 | 0.1667 | 0.2733 | 0.1533 | 0.14 | 0.4733 | 0.22 | 0.1667 | 0.3533 | 0.1733 | 0.2067 | 0.0467 | 0.2067 | 0.551 |
| dot_precision@5 | 0.128 | 0.232 | 0.108 | 0.092 | 0.452 | 0.156 | 0.144 | 0.24 | 0.124 | 0.184 | 0.044 | 0.132 | 0.5102 |
| dot_precision@10 | 0.072 | 0.214 | 0.074 | 0.06 | 0.396 | 0.086 | 0.09 | 0.138 | 0.082 | 0.138 | 0.038 | 0.078 | 0.4122 |
| dot_recall@1 | 0.32 | 0.0107 | 0.23 | 0.0883 | 0.0677 | 0.44 | 0.1393 | 0.34 | 0.3467 | 0.0597 | 0.02 | 0.335 | 0.0349 |
| dot_recall@3 | 0.5 | 0.0404 | 0.44 | 0.1817 | 0.142 | 0.64 | 0.2604 | 0.53 | 0.4707 | 0.1287 | 0.14 | 0.535 | 0.1078 |
| dot_recall@5 | 0.64 | 0.0582 | 0.51 | 0.1923 | 0.1928 | 0.7367 | 0.3118 | 0.6 | 0.5507 | 0.1897 | 0.22 | 0.575 | 0.1671 |
| dot_recall@10 | 0.72 | 0.085 | 0.67 | 0.2523 | 0.2816 | 0.7967 | 0.3924 | 0.69 | 0.7507 | 0.2837 | 0.38 | 0.66 | 0.2671 |
| dot_ndcg@10 | 0.5061 | 0.2416 | 0.4431 | 0.2097 | 0.4999 | 0.6191 | 0.3072 | 0.6198 | 0.5326 | 0.2575 | 0.1746 | 0.5065 | 0.4629 |
| dot_mrr@10 | 0.4382 | 0.3932 | 0.3818 | 0.299 | 0.7169 | 0.5679 | 0.3309 | 0.7347 | 0.4783 | 0.4254 | 0.1124 | 0.4627 | 0.6864 |
| dot_map@100 | 0.4501 | 0.0842 | 0.3762 | 0.168 | 0.3705 | 0.5645 | 0.251 | 0.5405 | 0.4735 | 0.1769 | 0.1156 | 0.4604 | 0.3432 |
| query_active_dims | 105.08 | 150.78 | 97.18 | 250.86 | 146.02 | 253.38 | 85.7 | 152.54 | 52.9 | 197.2 | 732.46 | 276.2 | 37.8163 |
| query_sparsity_ratio | 0.9966 | 0.9951 | 0.9968 | 0.9918 | 0.9952 | 0.9917 | 0.9972 | 0.995 | 0.9983 | 0.9935 | 0.976 | 0.991 | 0.9988 |
| corpus_active_dims | 381.3875 | 807.0742 | 564.0423 | 643.3269 | 481.7581 | 749.9185 | 416.9383 | 553.4067 | 61.3555 | 676.0037 | 648.4751 | 729.4652 | 493.4804 |
| corpus_sparsity_ratio | 0.9875 | 0.9736 | 0.9815 | 0.9789 | 0.9842 | 0.9754 | 0.9863 | 0.9819 | 0.998 | 0.9779 | 0.9788 | 0.9761 | 0.9838 |
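
For reference, the sparsity ratios are derived from the active dimensions relative to the 30522-token vocabulary, i.e. sparsity_ratio = 1 - active_dims / 30522; for example, 381.3875 average active NanoMSMARCO corpus dimensions give 1 - 381.3875 / 30522 ≈ 0.9875.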

Sparse Nano BEIR

  • Dataset: NanoBEIR_mean
  • Evaluated with SparseNanoBEIREvaluator with these parameters:
    {
        "dataset_names": [
            "msmarco",
            "nfcorpus",
            "nq"
        ]
    }
    
| Metric | Value |
|:---|:---|
| dot_accuracy@1 | 0.3067 |
| dot_accuracy@3 | 0.4933 |
| dot_accuracy@5 | 0.58 |
| dot_accuracy@10 | 0.6733 |
| dot_precision@1 | 0.3067 |
| dot_precision@3 | 0.2022 |
| dot_precision@5 | 0.1613 |
| dot_precision@10 | 0.1133 |
| dot_recall@1 | 0.194 |
| dot_recall@3 | 0.3495 |
| dot_recall@5 | 0.42 |
| dot_recall@10 | 0.4831 |
| dot_ndcg@10 | 0.3946 |
| dot_mrr@10 | 0.414 |
| dot_map@100 | 0.3065 |
| query_active_dims | 136.6333 |
| query_sparsity_ratio | 0.9955 |
| corpus_active_dims | 565.1 |
| corpus_sparsity_ratio | 0.9815 |
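
The NanoBEIR numbers above can be reproduced with the evaluator named in the bullets. A minimal sketch (the printed metric key is an assumption, following the column names in the Training Logs below):

from sentence_transformers import SparseEncoder
from sentence_transformers.sparse_encoder.evaluation import SparseNanoBEIREvaluator

model = SparseEncoder("tomaarsen/splade-distilbert-base-uncased-gooaq-peft-r128")
evaluator = SparseNanoBEIREvaluator(dataset_names=["msmarco", "nfcorpus", "nq"])
results = evaluator(model)
# Assumed key name, matching the training-log column NanoBEIR_mean_dot_ndcg@10
print(results["NanoBEIR_mean_dot_ndcg@10"])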

Sparse Nano BEIR

  • Dataset: NanoBEIR_mean
  • Evaluated with SparseNanoBEIREvaluator with these parameters:
    {
        "dataset_names": [
            "climatefever",
            "dbpedia",
            "fever",
            "fiqa2018",
            "hotpotqa",
            "msmarco",
            "nfcorpus",
            "nq",
            "quoraretrieval",
            "scidocs",
            "arguana",
            "scifact",
            "touche2020"
        ]
    }
    
| Metric | Value |
|:---|:---|
| dot_accuracy@1 | 0.3516 |
| dot_accuracy@3 | 0.5366 |
| dot_accuracy@5 | 0.6091 |
| dot_accuracy@10 | 0.7153 |
| dot_precision@1 | 0.3516 |
| dot_precision@3 | 0.2408 |
| dot_precision@5 | 0.1959 |
| dot_precision@10 | 0.1445 |
| dot_recall@1 | 0.1871 |
| dot_recall@3 | 0.3167 |
| dot_recall@5 | 0.3803 |
| dot_recall@10 | 0.4792 |
| dot_ndcg@10 | 0.4139 |
| dot_mrr@10 | 0.4637 |
| dot_map@100 | 0.3365 |
| query_active_dims | 195.4823 |
| query_sparsity_ratio | 0.9936 |
| corpus_active_dims | 525.5023 |
| corpus_sparsity_ratio | 0.9828 |

Training Details

Training Dataset

gooaq

  • Dataset: gooaq at b089f72
  • Size: 99,000 training samples
  • Columns: question and answer
  • Approximate statistics based on the first 1000 samples:

    |         | question                                          | answer                                              |
    |:--------|:--------------------------------------------------|:----------------------------------------------------|
    | type    | string                                            | string                                              |
    | details | min: 8 tokens, mean: 11.79 tokens, max: 24 tokens | min: 14 tokens, mean: 60.02 tokens, max: 153 tokens |

  • Samples:

    | question | answer |
    |:---|:---|
    | what are the 5 characteristics of a star? | Key Concept: Characteristics used to classify stars include color, temperature, size, composition, and brightness. |
    | are copic markers alcohol ink? | Copic Ink is alcohol-based and flammable. Keep away from direct sunlight and extreme temperatures. |
    | what is the difference between appellate term and appellate division? | Appellate terms An appellate term is an intermediate appellate court that hears appeals from the inferior courts within their designated counties or judicial districts, and are intended to ease the workload on the Appellate Division and provide a less expensive forum closer to the people. |
  • Loss: SpladeLoss with these parameters:
    {
        "loss": "SparseMultipleNegativesRankingLoss(scale=1.0, similarity_fct='dot_score')",
        "document_regularizer_weight": 3e-05,
        "query_regularizer_weight": 5e-05
    }
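
A minimal sketch of constructing this loss with the sparse encoder API, assuming `model` is the SparseEncoder built or loaded as shown earlier (similarity_fct defaults to dot-product scoring, per the parameters above):

from sentence_transformers.sparse_encoder.losses import (
    SpladeLoss,
    SparseMultipleNegativesRankingLoss,
)

# In-batch-negatives ranking loss, wrapped in SpladeLoss, which adds
# FLOPS sparsity regularization on document and query representations
loss = SpladeLoss(
    model=model,
    loss=SparseMultipleNegativesRankingLoss(model=model, scale=1.0),
    document_regularizer_weight=3e-5,
    query_regularizer_weight=5e-5,
)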
    

Evaluation Dataset

gooaq

  • Dataset: gooaq at b089f72
  • Size: 1,000 evaluation samples
  • Columns: question and answer
  • Approximate statistics based on the first 1000 samples:

    |         | question                                          | answer                                              |
    |:--------|:--------------------------------------------------|:----------------------------------------------------|
    | type    | string                                            | string                                              |
    | details | min: 8 tokens, mean: 11.93 tokens, max: 25 tokens | min: 14 tokens, mean: 60.84 tokens, max: 127 tokens |

  • Samples:

    | question | answer |
    |:---|:---|
    | should you take ibuprofen with high blood pressure? | In general, people with high blood pressure should use acetaminophen or possibly aspirin for over-the-counter pain relief. Unless your health care provider has said it's OK, you should not use ibuprofen, ketoprofen, or naproxen sodium. If aspirin or acetaminophen doesn't help with your pain, call your doctor. |
    | how old do you have to be to work in sc? | The general minimum age of employment for South Carolina youth is 14, although the state allows younger children who are performers to work in show business. If their families are agricultural workers, children younger than age 14 may also participate in farm labor. |
    | how to write a topic proposal for a research paper? | ['Write down the main topic of your paper. ... ', 'Write two or three short sentences under the main topic that explain why you chose that topic. ... ', 'Write a thesis sentence that states the angle and purpose of your research paper. ... ', 'List the items you will cover in the body of the paper that support your thesis statement.'] |
  • Loss: SpladeLoss with these parameters:
    {
        "loss": "SparseMultipleNegativesRankingLoss(scale=1.0, similarity_fct='dot_score')",
        "document_regularizer_weight": 3e-05,
        "query_regularizer_weight": 5e-05
    }
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 32
  • learning_rate: 2e-05
  • num_train_epochs: 1
  • bf16: True
  • load_best_model_at_end: True
  • batch_sampler: no_duplicates
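
A training sketch that wires these hyperparameters together. It assumes the base `model` built from distilbert as in the Full Model Architecture sketch and the `loss` from the Training Dataset section; the dataset id and the 99,000/1,000 split follow the Training Details above, while the output path is an assumption:

from datasets import load_dataset
from sentence_transformers import SparseEncoderTrainer, SparseEncoderTrainingArguments
from sentence_transformers.training_args import BatchSamplers

# 99,000 training + 1,000 evaluation (question, answer) pairs, per Training Details
dataset = load_dataset("sentence-transformers/gooaq", split="train").select(range(100_000))
dataset = dataset.train_test_split(test_size=1_000)

args = SparseEncoderTrainingArguments(
    output_dir="models/splade-distilbert-base-uncased-gooaq",  # assumed path
    eval_strategy="steps",
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    learning_rate=2e-5,
    num_train_epochs=1,
    bf16=True,
    load_best_model_at_end=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # avoids duplicate queries within a batch
)

trainer = SparseEncoderTrainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
    loss=loss,
)
trainer.train()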

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 32
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 1
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional
  • router_mapping: {}
  • learning_rate_mapping: {}

Training Logs

| Epoch | Step | Training Loss | Validation Loss | NanoMSMARCO_dot_ndcg@10 | NanoNFCorpus_dot_ndcg@10 | NanoNQ_dot_ndcg@10 | NanoBEIR_mean_dot_ndcg@10 | NanoClimateFEVER_dot_ndcg@10 | NanoDBPedia_dot_ndcg@10 | NanoFEVER_dot_ndcg@10 | NanoFiQA2018_dot_ndcg@10 | NanoHotpotQA_dot_ndcg@10 | NanoQuoraRetrieval_dot_ndcg@10 | NanoSCIDOCS_dot_ndcg@10 | NanoArguAna_dot_ndcg@10 | NanoSciFact_dot_ndcg@10 | NanoTouche2020_dot_ndcg@10 |
|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|
| 0.0323 | 100 | 81.7292 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0646 | 200 | 4.3059 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0970 | 300 | 0.8078 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1293 | 400 | 0.4309 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1616 | 500 | 0.3837 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1939 | 600 | 0.282 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1972 | 610 | - | 0.1867 | 0.4508 | 0.2059 | 0.3905 | 0.3491 | - | - | - | - | - | - | - | - | - | - |
| 0.2262 | 700 | 0.2593 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2586 | 800 | 0.2161 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2909 | 900 | 0.2 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3232 | 1000 | 0.2259 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3555 | 1100 | 0.2161 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3878 | 1200 | 0.1835 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3943 | 1220 | - | 0.1368 | 0.4567 | 0.2373 | 0.4209 | 0.3717 | - | - | - | - | - | - | - | - | - | - |
| 0.4202 | 1300 | 0.1936 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4525 | 1400 | 0.1689 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4848 | 1500 | 0.1858 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5171 | 1600 | 0.1639 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5495 | 1700 | 0.1376 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5818 | 1800 | 0.1677 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| **0.5915** | **1830** | - | **0.1138** | **0.5061** | **0.2416** | **0.4431** | **0.3969** | - | - | - | - | - | - | - | - | - | - |
| 0.6141 | 1900 | 0.1483 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6464 | 2000 | 0.1513 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6787 | 2100 | 0.1449 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7111 | 2200 | 0.193 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7434 | 2300 | 0.1554 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7757 | 2400 | 0.1372 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7886 | 2440 | - | 0.1148 | 0.5084 | 0.2240 | 0.4428 | 0.3917 | - | - | - | - | - | - | - | - | - | - |
| 0.8080 | 2500 | 0.1308 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8403 | 2600 | 0.1284 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8727 | 2700 | 0.1309 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9050 | 2800 | 0.1458 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9373 | 2900 | 0.1351 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9696 | 3000 | 0.1135 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9858 | 3050 | - | 0.1068 | 0.5062 | 0.2238 | 0.4539 | 0.3946 | - | - | - | - | - | - | - | - | - | - |
| -1 | -1 | - | - | 0.5061 | 0.2416 | 0.4431 | 0.4139 | 0.2097 | 0.4999 | 0.6191 | 0.3072 | 0.6198 | 0.5326 | 0.2575 | 0.1746 | 0.5065 | 0.4629 |
  • The bold row denotes the saved checkpoint.

Environmental Impact

Carbon emissions were measured using CodeCarbon.

  • Energy Consumed: 0.043 kWh
  • Carbon Emitted: 0.017 kg of CO2
  • Hours Used: 0.193 hours
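
A minimal sketch of how a CodeCarbon measurement like the one above is taken; this is illustrative, not the exact tracking code used for this model:

from codecarbon import EmissionsTracker

tracker = EmissionsTracker()
tracker.start()
# ... run training ...
emissions_kg = tracker.stop()  # estimated kg of CO2 emitted
print(f"{emissions_kg:.3f} kg CO2")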

Training Hardware

  • On Cloud: No
  • GPU Model: 1 x NVIDIA GeForce RTX 3090
  • CPU Model: 13th Gen Intel(R) Core(TM) i7-13700K
  • RAM Size: 31.78 GB

Framework Versions

  • Python: 3.11.6
  • Sentence Transformers: 4.2.0.dev0
  • Transformers: 4.52.4
  • PyTorch: 2.7.1+cu126
  • Accelerate: 1.5.1
  • Datasets: 2.21.0
  • Tokenizers: 0.21.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

SpladeLoss

@misc{formal2022distillationhardnegativesampling,
      title={From Distillation to Hard Negative Sampling: Making Sparse Neural IR Models More Effective},
      author={Thibault Formal and Carlos Lassance and Benjamin Piwowarski and Stéphane Clinchant},
      year={2022},
      eprint={2205.04733},
      archivePrefix={arXiv},
      primaryClass={cs.IR},
      url={https://arxiv.org/abs/2205.04733},
}

SparseMultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}

FlopsLoss

@article{paria2020minimizing,
    title={Minimizing flops to learn efficient sparse representations},
    author={Paria, Biswajit and Yeh, Chih-Kuan and Yen, Ian EH and Xu, Ning and Ravikumar, Pradeep and P{\'o}czos, Barnab{\'a}s},
    journal={arXiv preprint arXiv:2004.05665},
    year={2020}
}