splade-distilbert-base-uncased trained on GooAQ

This is a SPLADE Sparse Encoder model finetuned from distilbert/distilbert-base-uncased on the gooaq dataset using the sentence-transformers library. It maps sentences & paragraphs to a 30522-dimensional sparse vector space and can be used for semantic search and sparse retrieval.

Model Details

Model Description

  • Model Type: SPLADE Sparse Encoder
  • Base model: distilbert/distilbert-base-uncased
  • Maximum Sequence Length: 256 tokens
  • Output Dimensionality: 30522 dimensions
  • Similarity Function: Dot Product
  • Training Dataset: gooaq
  • Language: en
  • License: apache-2.0

Model Sources

  • Documentation: Sentence Transformers Documentation (https://sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SparseEncoder(
  (0): MLMTransformer({'max_seq_length': 256, 'do_lower_case': False, 'architecture': 'DistilBertForMaskedLM'})
  (1): SpladePooling({'pooling_strategy': 'max', 'activation_function': 'relu', 'word_embedding_dimension': 30522})
)
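
The two modules can also be assembled by hand, for instance to swap in a different base model. A minimal sketch, assuming the MLMTransformer and SpladePooling modules exposed under sentence_transformers.sparse_encoder.models:

from sentence_transformers import SparseEncoder
from sentence_transformers.sparse_encoder.models import MLMTransformer, SpladePooling

# The MLM head emits one vocabulary-sized logit vector per token; SpladePooling
# applies log(1 + ReLU(logits)) and max-pools over the sequence, producing a
# single 30522-dimensional sparse vector per input text.
mlm = MLMTransformer("distilbert/distilbert-base-uncased", max_seq_length=256)
pooling = SpladePooling(pooling_strategy="max", activation_function="relu")
model = SparseEncoder(modules=[mlm, pooling])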

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SparseEncoder

# Download from the 🤗 Hub
model = SparseEncoder("tomaarsen/splade-distilbert-base-uncased-gooaq")
# Run inference
queries = [
    "how many days for doxycycline to work on sinus infection?",
]
documents = [
    'Treatment of suspected bacterial infection is with antibiotics, such as amoxicillin/clavulanate or doxycycline, given for 5 to 7 days for acute sinusitis and for up to 6 weeks for chronic sinusitis.',
    'Most engagements typically have a cocktail dress code, calling for dresses at, or slightly above, knee-length and high heels. If your party states a different dress code, however, such as semi-formal or dressy-casual, you may need to dress up or down accordingly.',
    'The average service life of a gas furnace is about 15 years, but the actual life span of an individual unit can vary greatly. There are a number of contributing factors that determine the age a furnace reaches: The quality of the equipment.',
]
query_embeddings = model.encode_query(queries)
document_embeddings = model.encode_document(documents)
print(query_embeddings.shape, document_embeddings.shape)
# [1, 30522] [3, 30522]

# Get the similarity scores for the embeddings
similarities = model.similarity(query_embeddings, document_embeddings)
print(similarities)
# tensor([[103.7028,  26.2666,  35.3421]])
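
Because every dimension corresponds to a vocabulary token, the embeddings are directly interpretable. A small follow-up sketch, assuming the SparseEncoder.decode helper, to inspect the highest-weighted tokens of the query embedding:

# Map the non-zero dimensions of the first query back to vocabulary tokens
decoded = model.decode(query_embeddings[0], top_k=10)
print(decoded)
# Expected to surface tokens such as 'doxycycline', 'sinus', 'infection'
# alongside their weights (illustrative, not actual output)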

Evaluation

Metrics

Sparse Information Retrieval

  • Datasets: NanoMSMARCO, NanoNFCorpus, NanoNQ, NanoClimateFEVER, NanoDBPedia, NanoFEVER, NanoFiQA2018, NanoHotpotQA, NanoQuoraRetrieval, NanoSCIDOCS, NanoArguAna, NanoSciFact and NanoTouche2020
  • Evaluated with SparseInformationRetrievalEvaluator
| Metric | NanoMSMARCO | NanoNFCorpus | NanoNQ | NanoClimateFEVER | NanoDBPedia | NanoFEVER | NanoFiQA2018 | NanoHotpotQA | NanoQuoraRetrieval | NanoSCIDOCS | NanoArguAna | NanoSciFact | NanoTouche2020 |
|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|
| dot_accuracy@1 | 0.28 | 0.24 | 0.36 | 0.24 | 0.6 | 0.54 | 0.32 | 0.72 | 0.5 | 0.36 | 0.06 | 0.5 | 0.5714 |
| dot_accuracy@3 | 0.56 | 0.46 | 0.6 | 0.44 | 0.78 | 0.78 | 0.48 | 0.78 | 0.74 | 0.54 | 0.38 | 0.58 | 0.8367 |
| dot_accuracy@5 | 0.62 | 0.5 | 0.68 | 0.52 | 0.84 | 0.9 | 0.52 | 0.8 | 0.84 | 0.68 | 0.44 | 0.64 | 0.8776 |
| dot_accuracy@10 | 0.72 | 0.58 | 0.76 | 0.62 | 0.9 | 0.9 | 0.62 | 0.92 | 0.96 | 0.76 | 0.5 | 0.7 | 0.9388 |
| dot_precision@1 | 0.28 | 0.24 | 0.36 | 0.24 | 0.6 | 0.54 | 0.32 | 0.72 | 0.5 | 0.36 | 0.06 | 0.5 | 0.5714 |
| dot_precision@3 | 0.1867 | 0.2933 | 0.2067 | 0.1533 | 0.4867 | 0.26 | 0.2133 | 0.4133 | 0.26 | 0.2467 | 0.1267 | 0.2067 | 0.5238 |
| dot_precision@5 | 0.124 | 0.252 | 0.14 | 0.112 | 0.448 | 0.18 | 0.156 | 0.26 | 0.188 | 0.212 | 0.088 | 0.136 | 0.502 |
| dot_precision@10 | 0.072 | 0.214 | 0.08 | 0.074 | 0.388 | 0.094 | 0.102 | 0.152 | 0.118 | 0.152 | 0.05 | 0.08 | 0.4388 |
| dot_recall@1 | 0.28 | 0.0078 | 0.35 | 0.115 | 0.075 | 0.5167 | 0.1822 | 0.36 | 0.49 | 0.0757 | 0.06 | 0.465 | 0.0405 |
| dot_recall@3 | 0.56 | 0.0392 | 0.58 | 0.2057 | 0.143 | 0.7267 | 0.2922 | 0.62 | 0.7067 | 0.1527 | 0.38 | 0.545 | 0.1132 |
| dot_recall@5 | 0.62 | 0.066 | 0.65 | 0.254 | 0.1796 | 0.8367 | 0.3451 | 0.65 | 0.8013 | 0.2187 | 0.44 | 0.605 | 0.1732 |
| dot_recall@10 | 0.72 | 0.0853 | 0.72 | 0.303 | 0.2627 | 0.8567 | 0.4614 | 0.76 | 0.9133 | 0.3137 | 0.5 | 0.69 | 0.2875 |
| dot_ndcg@10 | 0.489 | 0.2395 | 0.5442 | 0.2509 | 0.4899 | 0.7043 | 0.3639 | 0.6876 | 0.7142 | 0.2977 | 0.2921 | 0.5783 | 0.4859 |
| dot_mrr@10 | 0.416 | 0.3644 | 0.4959 | 0.3528 | 0.7127 | 0.6723 | 0.4092 | 0.7661 | 0.6458 | 0.4796 | 0.2237 | 0.5562 | 0.7082 |
| dot_map@100 | 0.4301 | 0.0903 | 0.4945 | 0.1923 | 0.3807 | 0.6526 | 0.3017 | 0.6246 | 0.6499 | 0.2173 | 0.2347 | 0.5447 | 0.3646 |
| query_active_dims | 111.46 | 156.66 | 103.9 | 240.48 | 159.22 | 211.28 | 103.12 | 132.78 | 63.4 | 247.56 | 477.06 | 280.32 | 61.8367 |
| query_sparsity_ratio | 0.9963 | 0.9949 | 0.9966 | 0.9921 | 0.9948 | 0.9931 | 0.9966 | 0.9956 | 0.9979 | 0.9919 | 0.9844 | 0.9908 | 0.998 |
| corpus_active_dims | 310.8414 | 505.3576 | 356.2113 | 398.2761 | 347.9973 | 428.2852 | 340.6042 | 392.0682 | 73.4578 | 424.1747 | 455.6429 | 451.0737 | 380.6023 |
| corpus_sparsity_ratio | 0.9898 | 0.9834 | 0.9883 | 0.987 | 0.9886 | 0.986 | 0.9888 | 0.9872 | 0.9976 | 0.9861 | 0.9851 | 0.9852 | 0.9875 |
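
The sparsity ratios follow directly from the active dimension counts: sparsity_ratio = 1 - active_dims / 30522. For NanoMSMARCO queries, for example, 1 - 111.46 / 30522 ≈ 0.9963, i.e. fewer than 0.4% of the dimensions are non-zero on average.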

Sparse Nano BEIR

  • Dataset: NanoBEIR_mean
  • Evaluated with SparseNanoBEIREvaluator with these parameters:
    {
        "dataset_names": [
            "msmarco",
            "nfcorpus",
            "nq"
        ]
    }
    
| Metric | Value |
|:---|:---|
| dot_accuracy@1 | 0.2733 |
| dot_accuracy@3 | 0.5 |
| dot_accuracy@5 | 0.6 |
| dot_accuracy@10 | 0.7067 |
| dot_precision@1 | 0.2733 |
| dot_precision@3 | 0.2111 |
| dot_precision@5 | 0.1707 |
| dot_precision@10 | 0.1247 |
| dot_recall@1 | 0.1668 |
| dot_recall@3 | 0.339 |
| dot_recall@5 | 0.4169 |
| dot_recall@10 | 0.5139 |
| dot_ndcg@10 | 0.4032 |
| dot_mrr@10 | 0.4111 |
| dot_map@100 | 0.3021 |
| query_active_dims | 141.7933 |
| query_sparsity_ratio | 0.9954 |
| corpus_active_dims | 381.7903 |
| corpus_sparsity_ratio | 0.9875 |
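
These figures can be reproduced with the evaluator directly. A minimal sketch, assuming the SparseNanoBEIREvaluator import path from sentence_transformers.sparse_encoder.evaluation:

from sentence_transformers.sparse_encoder.evaluation import SparseNanoBEIREvaluator

# Evaluate on the same three-dataset NanoBEIR subset as above
evaluator = SparseNanoBEIREvaluator(dataset_names=["msmarco", "nfcorpus", "nq"])
results = evaluator(model)
print(results)  # keys follow the metric names in the table, e.g. NanoBEIR_mean_dot_ndcg@10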

Sparse Nano BEIR

  • Dataset: NanoBEIR_mean
  • Evaluated with SparseNanoBEIREvaluator with these parameters:
    {
        "dataset_names": [
            "climatefever",
            "dbpedia",
            "fever",
            "fiqa2018",
            "hotpotqa",
            "msmarco",
            "nfcorpus",
            "nq",
            "quoraretrieval",
            "scidocs",
            "arguana",
            "scifact",
            "touche2020"
        ]
    }
    
| Metric | Value |
|:---|:---|
| dot_accuracy@1 | 0.407 |
| dot_accuracy@3 | 0.6121 |
| dot_accuracy@5 | 0.6814 |
| dot_accuracy@10 | 0.7599 |
| dot_precision@1 | 0.407 |
| dot_precision@3 | 0.2752 |
| dot_precision@5 | 0.2152 |
| dot_precision@10 | 0.155 |
| dot_recall@1 | 0.2321 |
| dot_recall@3 | 0.3896 |
| dot_recall@5 | 0.4492 |
| dot_recall@10 | 0.5287 |
| dot_ndcg@10 | 0.4721 |
| dot_mrr@10 | 0.5233 |
| dot_map@100 | 0.3983 |
| query_active_dims | 180.8814 |
| query_sparsity_ratio | 0.9941 |
| corpus_active_dims | 360.7381 |
| corpus_sparsity_ratio | 0.9882 |

Training Details

Training Dataset

gooaq

  • Dataset: gooaq at b089f72
  • Size: 99,000 training samples
  • Columns: question and answer
  • Approximate statistics based on the first 1000 samples:
    | | question | answer |
    |:---|:---|:---|
    | type | string | string |
    | details | min: 8 tokens, mean: 11.79 tokens, max: 24 tokens | min: 14 tokens, mean: 60.02 tokens, max: 153 tokens |
  • Samples:
    | question | answer |
    |:---|:---|
    | what are the 5 characteristics of a star? | Key Concept: Characteristics used to classify stars include color, temperature, size, composition, and brightness. |
    | are copic markers alcohol ink? | Copic Ink is alcohol-based and flammable. Keep away from direct sunlight and extreme temperatures. |
    | what is the difference between appellate term and appellate division? | Appellate terms An appellate term is an intermediate appellate court that hears appeals from the inferior courts within their designated counties or judicial districts, and are intended to ease the workload on the Appellate Division and provide a less expensive forum closer to the people. |
  • Loss: SpladeLoss with these parameters:
    {
        "loss": "SparseMultipleNegativesRankingLoss(scale=1.0, similarity_fct='dot_score')",
        "document_regularizer_weight": 3e-05,
        "query_regularizer_weight": 5e-05
    }
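
In code, this corresponds to wrapping the in-batch-negatives ranking loss in the SPLADE regularizer. A minimal sketch, assuming the loss classes under sentence_transformers.sparse_encoder.losses:

from sentence_transformers.sparse_encoder.losses import (
    SpladeLoss,
    SparseMultipleNegativesRankingLoss,
)

# Ranking loss over in-batch negatives, plus FLOPS regularization on the
# query and document embeddings to encourage sparsity
loss = SpladeLoss(
    model=model,
    loss=SparseMultipleNegativesRankingLoss(model=model, scale=1.0),
    document_regularizer_weight=3e-5,
    query_regularizer_weight=5e-5,
)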
    

Evaluation Dataset

gooaq

  • Dataset: gooaq at b089f72
  • Size: 1,000 evaluation samples
  • Columns: question and answer
  • Approximate statistics based on the first 1000 samples:
    | | question | answer |
    |:---|:---|:---|
    | type | string | string |
    | details | min: 8 tokens, mean: 11.93 tokens, max: 25 tokens | min: 14 tokens, mean: 60.84 tokens, max: 127 tokens |
  • Samples:
    | question | answer |
    |:---|:---|
    | should you take ibuprofen with high blood pressure? | In general, people with high blood pressure should use acetaminophen or possibly aspirin for over-the-counter pain relief. Unless your health care provider has said it's OK, you should not use ibuprofen, ketoprofen, or naproxen sodium. If aspirin or acetaminophen doesn't help with your pain, call your doctor. |
    | how old do you have to be to work in sc? | The general minimum age of employment for South Carolina youth is 14, although the state allows younger children who are performers to work in show business. If their families are agricultural workers, children younger than age 14 may also participate in farm labor. |
    | how to write a topic proposal for a research paper? | ['Write down the main topic of your paper. ... ', 'Write two or three short sentences under the main topic that explain why you chose that topic. ... ', 'Write a thesis sentence that states the angle and purpose of your research paper. ... ', 'List the items you will cover in the body of the paper that support your thesis statement.'] |
  • Loss: SpladeLoss with these parameters:
    {
        "loss": "SparseMultipleNegativesRankingLoss(scale=1.0, similarity_fct='dot_score')",
        "document_regularizer_weight": 3e-05,
        "query_regularizer_weight": 5e-05
    }
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 32
  • learning_rate: 2e-05
  • num_train_epochs: 1
  • bf16: True
  • load_best_model_at_end: True
  • batch_sampler: no_duplicates
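
Put together, a run with these settings might look like the sketch below. The dataset id, split handling, and output path are illustrative assumptions, not the exact training script:

from datasets import load_dataset
from sentence_transformers import SparseEncoder
from sentence_transformers.sparse_encoder import (
    SparseEncoderTrainer,
    SparseEncoderTrainingArguments,
)
from sentence_transformers.sparse_encoder.losses import (
    SpladeLoss,
    SparseMultipleNegativesRankingLoss,
)
from sentence_transformers.training_args import BatchSamplers

# Initializing from an MLM checkpoint composes the MLMTransformer +
# SpladePooling architecture shown earlier
model = SparseEncoder("distilbert/distilbert-base-uncased")

# 100k (question, answer) pairs: 99k train / 1k eval, matching the card;
# dataset id assumed to be sentence-transformers/gooaq
dataset = load_dataset("sentence-transformers/gooaq", split="train").select(range(100_000))
dataset = dataset.train_test_split(test_size=1_000)

loss = SpladeLoss(
    model=model,
    loss=SparseMultipleNegativesRankingLoss(model=model, scale=1.0),
    document_regularizer_weight=3e-5,
    query_regularizer_weight=5e-5,
)

args = SparseEncoderTrainingArguments(
    output_dir="models/splade-distilbert-base-uncased-gooaq",  # illustrative
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    learning_rate=2e-5,
    num_train_epochs=1,
    bf16=True,
    eval_strategy="steps",
    load_best_model_at_end=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # no repeated queries within a batch
)

trainer = SparseEncoderTrainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
    loss=loss,
)
trainer.train()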

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 32
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 1
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional
  • router_mapping: {}
  • learning_rate_mapping: {}

Training Logs

| Epoch | Step | Training Loss | Validation Loss | NanoMSMARCO_dot_ndcg@10 | NanoNFCorpus_dot_ndcg@10 | NanoNQ_dot_ndcg@10 | NanoBEIR_mean_dot_ndcg@10 | NanoClimateFEVER_dot_ndcg@10 | NanoDBPedia_dot_ndcg@10 | NanoFEVER_dot_ndcg@10 | NanoFiQA2018_dot_ndcg@10 | NanoHotpotQA_dot_ndcg@10 | NanoQuoraRetrieval_dot_ndcg@10 | NanoSCIDOCS_dot_ndcg@10 | NanoArguAna_dot_ndcg@10 | NanoSciFact_dot_ndcg@10 | NanoTouche2020_dot_ndcg@10 |
|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|
| 0.0323 | 100 | 11.4443 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0646 | 200 | 0.2676 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0970 | 300 | 0.1639 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1293 | 400 | 0.1769 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1616 | 500 | 0.1593 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1939 | 600 | 0.1194 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1972 | 610 | - | 0.1080 | 0.4260 | 0.2314 | 0.4303 | 0.3626 | - | - | - | - | - | - | - | - | - | - |
| 0.2262 | 700 | 0.1351 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2586 | 800 | 0.109 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2909 | 900 | 0.1147 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3232 | 1000 | 0.0994 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3555 | 1100 | 0.0871 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3878 | 1200 | 0.0891 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| **0.3943** | **1220** | **-** | **0.0942** | **0.489** | **0.2395** | **0.5442** | **0.4242** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** |
| 0.4202 | 1300 | 0.09 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4525 | 1400 | 0.0902 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4848 | 1500 | 0.1046 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5171 | 1600 | 0.071 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5495 | 1700 | 0.0783 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5818 | 1800 | 0.0846 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5915 | 1830 | - | 0.0804 | 0.4745 | 0.2537 | 0.4780 | 0.4021 | - | - | - | - | - | - | - | - | - | - |
| 0.6141 | 1900 | 0.0572 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6464 | 2000 | 0.0712 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6787 | 2100 | 0.065 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7111 | 2200 | 0.096 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7434 | 2300 | 0.0764 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7757 | 2400 | 0.0722 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7886 | 2440 | - | 0.0716 | 0.4976 | 0.2348 | 0.4626 | 0.3983 | - | - | - | - | - | - | - | - | - | - |
| 0.8080 | 2500 | 0.0579 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8403 | 2600 | 0.0655 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8727 | 2700 | 0.0612 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9050 | 2800 | 0.0491 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9373 | 2900 | 0.0496 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9696 | 3000 | 0.0553 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9858 | 3050 | - | 0.0746 | 0.4990 | 0.2419 | 0.4688 | 0.4032 | - | - | - | - | - | - | - | - | - | - |
| -1 | -1 | - | - | 0.4890 | 0.2395 | 0.5442 | 0.4721 | 0.2509 | 0.4899 | 0.7043 | 0.3639 | 0.6876 | 0.7142 | 0.2977 | 0.2921 | 0.5783 | 0.4859 |
  • The bold row denotes the saved checkpoint.

Environmental Impact

Carbon emissions were measured using CodeCarbon.

  • Energy Consumed: 0.039 kWh
  • Carbon Emitted: 0.015 kg of CO2
  • Hours Used: 0.154 hours
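
A minimal sketch of taking such a measurement with the codecarbon package:

from codecarbon import EmissionsTracker

tracker = EmissionsTracker()
tracker.start()
# ... run training ...
emissions = tracker.stop()  # returns kg of CO2-equivalent emitted
print(emissions)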

Training Hardware

  • On Cloud: No
  • GPU Model: 1 x NVIDIA GeForce RTX 3090
  • CPU Model: 13th Gen Intel(R) Core(TM) i7-13700K
  • RAM Size: 31.78 GB

Framework Versions

  • Python: 3.11.6
  • Sentence Transformers: 4.2.0.dev0
  • Transformers: 4.52.4
  • PyTorch: 2.7.1+cu126
  • Accelerate: 1.5.1
  • Datasets: 2.21.0
  • Tokenizers: 0.21.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

SpladeLoss

@misc{formal2022distillationhardnegativesampling,
      title={From Distillation to Hard Negative Sampling: Making Sparse Neural IR Models More Effective},
      author={Thibault Formal and Carlos Lassance and Benjamin Piwowarski and Stéphane Clinchant},
      year={2022},
      eprint={2205.04733},
      archivePrefix={arXiv},
      primaryClass={cs.IR},
      url={https://arxiv.org/abs/2205.04733},
}

SparseMultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}

FlopsLoss

@article{paria2020minimizing,
    title={Minimizing flops to learn efficient sparse representations},
    author={Paria, Biswajit and Yeh, Chih-Kuan and Yen, Ian EH and Xu, Ning and Ravikumar, Pradeep and P{\'o}czos, Barnab{\'a}s},
    journal={arXiv preprint arXiv:2004.05665},
    year={2020}
}