SPLADE-BERT-Tiny

This is a SPLADE Sparse Encoder model finetuned from prajjwal1/bert-tiny using the sentence-transformers library. It maps sentences & paragraphs to a 30522-dimensional sparse vector space and can be used for semantic search and sparse retrieval.

Model Details

Model Description

  • Model Type: SPLADE Sparse Encoder
  • Base model: prajjwal1/bert-tiny
  • Model Size: 4.42M parameters (F32)
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 30522 dimensions (the BERT WordPiece vocabulary size)
  • Similarity Function: Dot Product
  • Language: en
  • License: mit

Full Model Architecture

SparseEncoder(
  (0): MLMTransformer({'max_seq_length': 512, 'do_lower_case': False, 'architecture': 'BertForMaskedLM'})
  (1): SpladePooling({'pooling_strategy': 'max', 'activation_function': 'relu', 'word_embedding_dimension': 30522})
)
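
For reference, here is a minimal sketch of how this two-module stack could be assembled from the base checkpoint. It assumes the sentence-transformers v5 sparse-encoder module layout (MLMTransformer and SpladePooling under sentence_transformers.sparse_encoder.models) and is not the exact script used to train this model.

from sentence_transformers import SparseEncoder
from sentence_transformers.sparse_encoder.models import MLMTransformer, SpladePooling

# The MLM head scores every token position against the full 30522-entry BERT vocabulary.
mlm = MLMTransformer("prajjwal1/bert-tiny", max_seq_length=512)

# SpladePooling activates these logits and max-pools them over the sequence,
# yielding one non-negative weight per vocabulary term, i.e. a 30522-dimensional sparse vector.
pooling = SpladePooling(pooling_strategy="max", activation_function="relu")

model = SparseEncoder(modules=[mlm, pooling])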

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SparseEncoder

# Download from the 🤗 Hub
model = SparseEncoder("rasyosef/SPLADE-BERT-Tiny")
# Run inference
queries = [
    "what code section is depreciation",
]
documents = [
    'Section 179 depreciation deduction. Section 179 of the United States Internal Revenue Code (26 U.S.C. § 179), allows a taxpayer to elect to deduct the cost of certain types of property on their income taxes as an expense, rather than requiring the cost of the property to be capitalized and depreciated.',
    '--No depreciation deduction shall be allowed under this section (and no depreciation or amortization deduction shall be allowed under any other provision of this subtitle) to the taxpayer for any term interest in property for any period during which the remainder interest in such property is held (directly or indirectly) by a related person.',
    'Depreciation - Amortization Code. Refer to the IRS Instructions for Form 4562, Line 42, for the amortization code.',
]
query_embeddings = model.encode_query(queries)
document_embeddings = model.encode_document(documents)
print(query_embeddings.shape, document_embeddings.shape)
# [1, 30522] [3, 30522]

# Get the similarity scores for the embeddings
similarities = model.similarity(query_embeddings, document_embeddings)
print(similarities)
# tensor([[17.0167, 11.4943, 13.8083]])
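
To see which vocabulary terms a query actually activates, the sparse embedding can be inspected directly. The sketch below continues from the snippet above; it assumes encode_query returns torch tensors (sparse or dense, hence the .to_dense() call) and loads the tokenizer separately via transformers:

import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("rasyosef/SPLADE-BERT-Tiny")

# Densify the first query embedding and list its highest-weighted vocabulary terms.
query_vec = query_embeddings.to_dense()[0]
weights, token_ids = torch.topk(query_vec, k=10)
for token, weight in zip(tokenizer.convert_ids_to_tokens(token_ids.tolist()), weights.tolist()):
    if weight > 0:
        print(f"{token:>12s}  {weight:.3f}")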

Evaluation

Metrics

Sparse Information Retrieval

Metric Value
dot_accuracy@1 0.457
dot_accuracy@3 0.7572
dot_accuracy@5 0.8574
dot_accuracy@10 0.929
dot_precision@1 0.457
dot_precision@3 0.2591
dot_precision@5 0.178
dot_precision@10 0.0971
dot_recall@1 0.4415
dot_recall@3 0.7428
dot_recall@5 0.8472
dot_recall@10 0.9223
dot_ndcg@10 0.6932
dot_mrr@10 0.6235
dot_map@100 0.6191
query_active_dims 21.216
query_sparsity_ratio 0.9993
corpus_active_dims 159.5419
corpus_sparsity_ratio 0.9948
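
The metrics above are reported by a sparse information-retrieval evaluator. As a hedged sketch of how comparable numbers could be computed on your own data, assuming the SparseInformationRetrievalEvaluator class and argument names from the sentence-transformers v5 API (the tiny query/corpus/qrels dicts below are illustrative only):

from sentence_transformers import SparseEncoder
from sentence_transformers.sparse_encoder.evaluation import SparseInformationRetrievalEvaluator

model = SparseEncoder("rasyosef/SPLADE-BERT-Tiny")

queries = {"q1": "what code section is depreciation"}  # query id -> query text
corpus = {"d1": "Section 179 of the Internal Revenue Code allows a taxpayer to deduct the cost of certain property as an expense."}  # doc id -> passage text
relevant_docs = {"q1": {"d1"}}  # query id -> set of relevant doc ids

evaluator = SparseInformationRetrievalEvaluator(
    queries=queries,
    corpus=corpus,
    relevant_docs=relevant_docs,
    name="example",
)
print(evaluator(model))  # reports dot_accuracy@k, dot_ndcg@10, sparsity statistics, etc.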

Training Details

Training Dataset

Unnamed Dataset

  • Size: 800,000 training samples
  • Columns: query, positive, negative_1, and negative_2
  • Approximate statistics based on the first 1000 samples:
    • query: string, min 4 / mean 9.03 / max 30 tokens
    • positive: string, min 15 / mean 81.92 / max 220 tokens
    • negative_1: string, min 22 / mean 78.63 / max 227 tokens
    • negative_2: string, min 18 / mean 78.11 / max 236 tokens
  • Samples:
    • query: definition of vas deferens
      positive: Vas deferens: The tube that connects the testes with the urethra. The vas deferens is a coiled duct that conveys sperm from the epididymis to the ejaculatory duct and the urethra.
      negative_1: For further discussion of the vas deferens within the context of the structures and functions of reproduction and sexuality, please see the overview section “The Reproductive System.”. See also FERTILITY; TESTICLES; VASECTOMY.
      negative_2: 1 Testicular cancer symptoms include a painless lump or swelling in a testicle, testicle or scrotum pain, a dull ache in the abdomen, back, or groin, and. 2 Urinary Tract Infections (UTIs) A urinary tract infection (UTI) is an infection of the bladder, kidneys, ureters, or urethra.
    • query: how old is kieron williamson
      positive: Kieron Williamson – the latest artist to be part of GoGoDragons! April 21, 2015. A 12-year-old artist, nicknamed Mini-Monet, is to unveil a sculpture of a dragon he has painted for GoGoDragons. Kieron Williamson, from Norfolk, who has so far earned about £2m, painted the 5ft-tall (1.5m) dragon for the event in Norwich.
      negative_1: 8-year-old artist: Don't call me Monet. London, England (CNN) -- He has the deft brush strokes of a seasoned artist, but Kieron Williamson is just eight years old. The boy from Norfolk, in eastern England, has been hailed by the British press as a mini Monet, a reference to the famous French impressionist.
      negative_2: Needless to say, this site does not tell you much about his football career (yet!), but the website will tell you everything there is to know about Kieron Williamson’s passion for oil, watercolour and pastel,
    • query: when do you start showing third pregnancy
      positive: Yes No Thank you! I am pregnant with my third child and I am definitly showing at 10 weeks. I am starting to wear some maternity clothes. My low low rise pre-pregnancy jeans still work. My biggest problem is shirts, but fortunately the style right now is loose shirts that look maternity.
      negative_1: Some women do not start to show until they are well into their second trimester or even the start of their third trimester. If you are overweight at the start of your pregnancy, you may not gain as much weight during your pregnancy and may not begin to show until later into your pregnancy. Average: 3.591215.
  • Loss: SpladeLoss with these parameters:
    {
        "loss": "SparseMultipleNegativesRankingLoss(scale=1.0, similarity_fct='dot_score')",
        "document_regularizer_weight": 0.003,
        "query_regularizer_weight": 0.005
    }
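
A hedged sketch of wiring these columns and this loss together, assuming the sentence-transformers v5 sparse-encoder API; the single toy row is illustrative only:

from datasets import Dataset
from sentence_transformers import SparseEncoder
from sentence_transformers.sparse_encoder.models import MLMTransformer, SpladePooling
from sentence_transformers.sparse_encoder.losses import SpladeLoss, SparseMultipleNegativesRankingLoss

# Columns must match the (query, positive, negative_1, negative_2) layout described above.
train_dataset = Dataset.from_dict({
    "query":      ["definition of vas deferens"],
    "positive":   ["Vas deferens: the tube that conveys sperm from the epididymis to the urethra."],
    "negative_1": ["Testicular cancer symptoms include a painless lump or swelling in a testicle."],
    "negative_2": ["A urinary tract infection is an infection of the bladder, kidneys, ureters, or urethra."],
})

# Assemble the MLMTransformer + SpladePooling stack from the base checkpoint.
model = SparseEncoder(modules=[MLMTransformer("prajjwal1/bert-tiny"), SpladePooling(pooling_strategy="max")])

loss = SpladeLoss(
    model=model,
    loss=SparseMultipleNegativesRankingLoss(model=model, scale=1.0),
    document_regularizer_weight=0.003,  # FLOPS-style sparsity penalty on document vectors
    query_regularizer_weight=0.005,     # FLOPS-style sparsity penalty on query vectors
)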
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • gradient_accumulation_steps: 4
  • learning_rate: 6e-05
  • num_train_epochs: 6
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.025
  • fp16: True
  • load_best_model_at_end: True
  • optim: adamw_torch_fused
  • push_to_hub: True
  • batch_sampler: no_duplicates
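
A hedged sketch of how these settings could map onto the v5 trainer API (the SparseEncoderTrainingArguments / SparseEncoderTrainer names and output_dir are assumptions; model, loss, and train_dataset are the objects from the dataset/loss sketch above):

from sentence_transformers.sparse_encoder import SparseEncoderTrainer, SparseEncoderTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SparseEncoderTrainingArguments(
    output_dir="splade-bert-tiny",  # illustrative output path
    eval_strategy="epoch",
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=4,
    learning_rate=6e-5,
    num_train_epochs=6,
    lr_scheduler_type="cosine",
    warmup_ratio=0.025,
    fp16=True,
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    push_to_hub=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)

# In practice an eval_dataset or evaluator is also passed so that
# eval_strategy="epoch" and load_best_model_at_end can take effect.
trainer = SparseEncoderTrainer(model=model, args=args, train_dataset=train_dataset, loss=loss)
trainer.train()

With a per-device batch size of 16 and 4 gradient-accumulation steps, the effective batch size is 64; over 800,000 samples this gives the 12,500 optimizer steps per epoch shown in the training logs below.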

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 4
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 6e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 6
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.025
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: True
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • hub_revision: None
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • liger_kernel_config: None
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional
  • router_mapping: {}
  • learning_rate_mapping: {}

Training Logs

Epoch Step Training Loss dot_ndcg@10
1.0 12500 11.5771 0.6587
2.0 25000 0.7888 0.6810
3.0 37500 0.7271 0.6884
4.0 50000 0.6774 0.6920
5.0 62500 0.6436 0.6912
6.0 75000 0.6274 0.6932
  • The saved checkpoint is from epoch 6 (dot_ndcg@10 = 0.6932), the best-scoring epoch.

Framework Versions

  • Python: 3.11.11
  • Sentence Transformers: 5.0.0
  • Transformers: 4.53.1
  • PyTorch: 2.6.0+cu124
  • Accelerate: 1.5.2
  • Datasets: 3.6.0
  • Tokenizers: 0.21.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

SpladeLoss

@misc{formal2022distillationhardnegativesampling,
      title={From Distillation to Hard Negative Sampling: Making Sparse Neural IR Models More Effective},
      author={Thibault Formal and Carlos Lassance and Benjamin Piwowarski and Stéphane Clinchant},
      year={2022},
      eprint={2205.04733},
      archivePrefix={arXiv},
      primaryClass={cs.IR},
      url={https://arxiv.org/abs/2205.04733},
}

SparseMultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}

FlopsLoss

@article{paria2020minimizing,
    title={Minimizing flops to learn efficient sparse representations},
    author={Paria, Biswajit and Yeh, Chih-Kuan and Yen, Ian EH and Xu, Ning and Ravikumar, Pradeep and P{\'o}czos, Barnab{\'a}s},
    journal={arXiv preprint arXiv:2004.05665},
    year={2020}
}