SPLADE-BERT-Tiny

This is a SPLADE sparse encoder model fine-tuned from prajjwal1/bert-tiny using the sentence-transformers library. It maps sentences and paragraphs to a 30522-dimensional sparse vector space and can be used for semantic search and sparse retrieval.

Model Details

Model Description

  • Model Type: SPLADE Sparse Encoder
  • Base model: prajjwal1/bert-tiny
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 30522 dimensions
  • Similarity Function: Dot Product
  • Language: en
  • License: mit
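
The similarity function is a plain dot product over the 30522-dimensional sparse vectors, so only dimensions that are nonzero in both the query and the document contribute to a score. A minimal sketch (not the library's implementation), using hypothetical token ids and weights stored as dicts:

```python
# Minimal sketch: scoring two sparse vectors represented as
# {token_id: weight} dicts, assuming only nonzero entries are stored.
# The dot product reduces to a sum over the shared token ids.
def sparse_dot(query: dict, doc: dict) -> float:
    # Iterate over the smaller dict for efficiency.
    if len(query) > len(doc):
        query, doc = doc, query
    return sum(w * doc[t] for t, w in query.items() if t in doc)

query = {2054: 1.2, 2003: 0.4, 4425: 2.1}   # hypothetical token ids/weights
doc = {4425: 1.8, 2839: 0.9, 2003: 0.2}
print(sparse_dot(query, doc))  # ≈ 3.86 (2.1*1.8 + 0.4*0.2)
```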

Full Model Architecture

SparseEncoder(
  (0): MLMTransformer({'max_seq_length': 512, 'do_lower_case': False, 'architecture': 'BertForMaskedLM'})
  (1): SpladePooling({'pooling_strategy': 'max', 'activation_function': 'relu', 'word_embedding_dimension': 30522})
)
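
The SpladePooling module above follows the standard SPLADE formulation: apply ReLU to the MLM logits, take log(1 + x), and max-pool over sequence positions to get one weight per vocabulary term. A toy NumPy sketch of that step (toy shapes, not the library's code; the real model uses a 30522-token vocabulary):

```python
import numpy as np

# Sketch of SpladePooling with pooling_strategy='max' and a ReLU
# activation: relu, then log(1 + x), then max over sequence positions.
def splade_pool(mlm_logits: np.ndarray) -> np.ndarray:
    """mlm_logits: (seq_len, vocab_size) -> (vocab_size,) sparse-ish vector."""
    activated = np.log1p(np.maximum(mlm_logits, 0.0))  # relu then log(1 + x)
    return activated.max(axis=0)                       # max-pool over tokens

rng = np.random.default_rng(0)
logits = rng.normal(size=(6, 10))   # 6 token positions, toy vocab of 10
vec = splade_pool(logits)
print(vec.shape)         # (10,)
print((vec >= 0).all())  # True: the ReLU guarantees non-negative weights
```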

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SparseEncoder

# Download from the 🤗 Hub
model = SparseEncoder("yosefw/SPLADE-BERT-Tiny-BS256")
# Run inference
queries = [
    "what is funded depreciation",
]
documents = [
    "Depreciation Defined. Depreciating an asset means allocating the asset's cost over its useful life. The useful life of an asset, or economic resource, is the period of time over which a company intends to use this asset in operating activities or manufacturing processes. Funded Depreciation Defined. Funded depreciation is a fixed asset management method that helps a company set aside funds to renew machinery and equipment that it uses in operating activities. For instance, a company buys a new truck valued at $100,000 and records $10,000 in annual depreciation expense over 10 years.",
    'Funding Depreciation – Make Your Business More Profitable. Charles Hall is a CPA who heads up an excellent blog by the name of CPA-Scribo. His mission is to assist small- to medium-sized CPA firms with accounting, auditing, fraud and technology issues. We here at Depreciation Guru think he does an excellent job! In a recent post he tackled the topic of funding depreciation.',
    'Ratings: 6.1 /10 from 3,128 users. Reviews: 30 user | 2 critic. Liv, a popular television star whose show has just finished its run, and Maddie, an outstanding student and school basketball star whose popularity is on the rise until Liv makes a return to their high school.John D. Beck, Ron Hart.iv and Maddie is surprisingly good. I was expecting an OK show that was somewhat entertaining, instead it is a show that has charm, great comedic timing and is pretty darn adorable. The lead actress who plays both the twins (Dove Cameron) is extremely talented and also has a beautiful voice.',
]
query_embeddings = model.encode_query(queries)
document_embeddings = model.encode_document(documents)
print(query_embeddings.shape, document_embeddings.shape)
# [1, 30522] [3, 30522]

# Get the similarity scores for the embeddings
similarities = model.similarity(query_embeddings, document_embeddings)
print(similarities)
# tensor([[15.4318, 10.7445,  0.0000]])
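
Because each embedding dimension corresponds to a vocabulary token, a sparse embedding can be read as a weighted bag of terms. A small illustrative helper (not part of the library API), using a toy vocabulary list in place of the model's tokenizer:

```python
# Illustrative helper: list which vocabulary dimensions a sparse embedding
# activates, strongest first. `vocab` is a toy id -> token list standing in
# for the model's 30522-entry tokenizer vocabulary.
def top_activations(embedding, vocab, k=3):
    pairs = [(vocab[i], w) for i, w in enumerate(embedding) if w > 0]
    return sorted(pairs, key=lambda p: p[1], reverse=True)[:k]

vocab = ["[PAD]", "fund", "##ed", "depreciation", "asset", "truck"]
embedding = [0.0, 1.4, 0.3, 2.2, 0.9, 0.0]  # hypothetical weights
print(top_activations(embedding, vocab))
# [('depreciation', 2.2), ('fund', 1.4), ('asset', 0.9)]
```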

Evaluation

Metrics

Sparse Information Retrieval

Metric                 Value
dot_accuracy@1         0.469
dot_accuracy@3         0.7692
dot_accuracy@5         0.8726
dot_accuracy@10        0.9426
dot_precision@1        0.469
dot_precision@3        0.2635
dot_precision@5        0.1816
dot_precision@10       0.0989
dot_recall@1           0.4528
dot_recall@3           0.7551
dot_recall@5           0.8631
dot_recall@10          0.9369
dot_ndcg@10            0.7049
dot_mrr@10             0.6341
dot_map@100            0.6295
query_active_dims      19.5342
query_sparsity_ratio   0.9994
corpus_active_dims     153.2182
corpus_sparsity_ratio  0.995
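
The sparsity ratios above follow directly from the active-dimension counts: each is the fraction of the 30522 vocabulary dimensions that are zero on average. For example:

```python
# How the sparsity ratios relate to the active-dimension counts: the ratio
# is the fraction of the 30522 vocabulary dimensions that are zero.
VOCAB_SIZE = 30522

def sparsity_ratio(active_dims: float, vocab_size: int = VOCAB_SIZE) -> float:
    return 1.0 - active_dims / vocab_size

print(round(sparsity_ratio(19.5342), 4))   # 0.9994 (query side)
print(round(sparsity_ratio(153.2182), 4))  # 0.995  (corpus side)
```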

Training Details

Training Dataset

Unnamed Dataset

  • Size: 1,056,494 training samples
  • Columns: query, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    Column    Type    Min tokens  Mean tokens  Max tokens
    query     string  4           9.09         24
    positive  string  15          79.39        229
    negative  string  13          78.0         207
  • Samples:
    Sample 1
      query: what is the arg
      positive: For each point on the plane, arg is the function which returns the angle φ. In mathematics, arg is a function operating on complex numbers (visualized in a complex plane). It gives the angle between the positive real axis to the line joining the point to the origin, shown as φ in figure 1, known as an argument of the point.
      negative: Argument (complex analysis) In mathematics, arg is a function operating on complex numbers (visualized in a complex plane). It gives the angle between the positive real axis to the line joining the point to the origin, shown as φ in figure 1, known as an argument of the point.
    Sample 2
      query: symptoms of disc herniation
      positive: Other symptoms of a herniated disc include severe deep muscle pain and muscle spasms. This answer should not be considered medical advice and should not take the place of a doctor’s visit.
      negative: 1 If the disc herniation is large enough, the disc tissue can press on the adjacent spinal nerves that exit the spine at the level of the disc herniation. The physical examination, imaging tests, and electrical tests can aid in the diagnosis of a herniated disc.
    Sample 3
      query: which of the following is found in the dorsal body city
      positive: From Wikipedia, the free encyclopedia. Human body cavities: Dorsal body cavity is to the left. The dorsal body cavity is located along the dorsal (posterior) surface of the human body, where it is subdivided into the cranial cavity housing the brain and the spinal cavity housing the spinal cord. The two cavities are continuous with one another.
      negative: Dorsal Cavity: The dorsal cavity is an enclosed chamber that contains a portion of the body's organs and structures. The dorsal cavity is located in the posterior area of the trunk, head and neck and includes the cranial cavity and the spinal cavity. Conditions that can afflict the dorsal cavity include infection, infarction, trauma, cancer, genetic disorders, birth defects, and syndromes.
  • Loss: SpladeLoss with these parameters:
    {
        "loss": "SparseMultipleNegativesRankingLoss(scale=1.0, similarity_fct='dot_score')",
        "document_regularizer_weight": 0.003,
        "query_regularizer_weight": 0.005
    }
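
The regularizer weights above scale FLOPS-style sparsity penalties (Paria et al., 2020) applied to the query and document activations. A toy sketch of that regularizer, assuming the standard formulation (square the per-dimension mean absolute activation over the batch, then sum over the vocabulary):

```python
import numpy as np

# Sketch of the FLOPS regularizer that SpladeLoss weights with
# query_regularizer_weight / document_regularizer_weight: for each
# vocabulary dimension, square the mean absolute activation over the
# batch, then sum. This pushes activations toward sparsity. Toy shapes.
def flops_loss(activations: np.ndarray) -> float:
    """activations: (batch, vocab) non-negative SPLADE vectors."""
    mean_per_dim = np.abs(activations).mean(axis=0)  # shape: (vocab,)
    return float((mean_per_dim ** 2).sum())

batch = np.array([[0.0, 2.0, 0.0],
                  [0.0, 0.0, 4.0]])
print(flops_loss(batch))  # 0**2 + 1**2 + 2**2 = 5.0
```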
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 32
  • gradient_accumulation_steps: 8
  • learning_rate: 8e-05
  • num_train_epochs: 8
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.025
  • fp16: True
  • load_best_model_at_end: True
  • optim: adamw_torch_fused
  • push_to_hub: True
  • batch_sampler: no_duplicates
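
A quick sanity check on these settings: the effective batch size implied by the per-device batch size and gradient accumulation matches the "BS256" in the model name, and the resulting optimizer steps per epoch match the step counts in the training logs:

```python
import math

# Effective batch size = per-device batch size x gradient accumulation steps.
per_device_train_batch_size = 32
gradient_accumulation_steps = 8
effective_batch = per_device_train_batch_size * gradient_accumulation_steps
print(effective_batch)  # 256

# Optimizer steps per epoch over the 1,056,494-sample training set.
print(math.ceil(1_056_494 / effective_batch))  # 4127
```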

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 32
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 8
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 8e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 8
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.025
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: True
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • hub_revision: None
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • liger_kernel_config: None
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional
  • router_mapping: {}
  • learning_rate_mapping: {}

Training Logs

Epoch  Step   Training Loss  dot_ndcg@10
1.0    4127   18.923         0.6727
2.0    8254   0.5405         0.6863
3.0    12381  0.4993         0.6944
4.0    16508  0.4659         0.6981
5.0    20635  0.4383         0.7003
6.0    24762  0.4186         0.7029
7.0    28889  0.4058         0.7037
8.0    33016  0.4003         0.7049
  • The epoch 8.0 row denotes the saved checkpoint; its dot_ndcg@10 of 0.7049 matches the evaluation results above.

Framework Versions

  • Python: 3.11.13
  • Sentence Transformers: 5.0.0
  • Transformers: 4.53.3
  • PyTorch: 2.6.0+cu124
  • Accelerate: 1.8.1
  • Datasets: 4.0.0
  • Tokenizers: 0.21.2

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

SpladeLoss

@misc{formal2022distillationhardnegativesampling,
      title={From Distillation to Hard Negative Sampling: Making Sparse Neural IR Models More Effective},
      author={Thibault Formal and Carlos Lassance and Benjamin Piwowarski and Stéphane Clinchant},
      year={2022},
      eprint={2205.04733},
      archivePrefix={arXiv},
      primaryClass={cs.IR},
      url={https://arxiv.org/abs/2205.04733},
}

SparseMultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}

FlopsLoss

@article{paria2020minimizing,
    title={Minimizing flops to learn efficient sparse representations},
    author={Paria, Biswajit and Yeh, Chih-Kuan and Yen, Ian EH and Xu, Ning and Ravikumar, Pradeep and P{\'o}czos, Barnab{\'a}s},
    journal={arXiv preprint arXiv:2004.05665},
    year={2020}
}

Model size: 4.42M parameters (F32, Safetensors)