SPLADE-BERT-Tiny
This is a SPLADE Sparse Encoder model finetuned from prajjwal1/bert-tiny using the sentence-transformers library. It maps sentences & paragraphs to a 30522-dimensional sparse vector space and can be used for semantic search and sparse retrieval.
Model Details
Model Description
- Model Type: SPLADE Sparse Encoder
- Base model: prajjwal1/bert-tiny
- Maximum Sequence Length: 512 tokens
- Output Dimensionality: 30522 dimensions
- Similarity Function: Dot Product
- Language: en
- License: mit
Model Sources
- Documentation: Sentence Transformers Documentation
- Documentation: Sparse Encoder Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Sparse Encoders on Hugging Face
Full Model Architecture
SparseEncoder(
(0): MLMTransformer({'max_seq_length': 512, 'do_lower_case': False, 'architecture': 'BertForMaskedLM'})
(1): SpladePooling({'pooling_strategy': 'max', 'activation_function': 'relu', 'word_embedding_dimension': 30522})
)
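The SpladePooling module converts the BertForMaskedLM logits into one sparse vocabulary-sized vector per input. As a rough sketch of what max pooling with a relu activation computes (not the library's exact implementation):

import torch

def splade_pool(logits: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    # logits: MLM output of shape [batch, seq_len, 30522]
    # attention_mask: [batch, seq_len], 1 for real tokens, 0 for padding
    activations = torch.log1p(torch.relu(logits))             # log-saturated relu
    activations = activations * attention_mask.unsqueeze(-1)  # zero out padding positions
    return activations.max(dim=1).values                      # max over the sequence: [batch, 30522]

Most of the 30522 vocabulary dimensions come out exactly zero, which is what makes the output usable as a sparse bag of expanded terms.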
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SparseEncoder
# Download from the 🤗 Hub
model = SparseEncoder("rasyosef/SPLADE-BERT-Tiny")
# Run inference
queries = [
"what code section is depreciation",
]
documents = [
'Section 179 depreciation deduction. Section 179 of the United States Internal Revenue Code (26 U.S.C. § 179), allows a taxpayer to elect to deduct the cost of certain types of property on their income taxes as an expense, rather than requiring the cost of the property to be capitalized and depreciated.',
'--No depreciation deduction shall be allowed under this section (and no depreciation or amortization deduction shall be allowed under any other provision of this subtitle) to the taxpayer for any term interest in property for any period during which the remainder interest in such property is held (directly or indirectly) by a related person.',
'Depreciation - Amortization Code. Refer to the IRS Instructions for Form 4562, Line 42, for the amortization code.',
]
query_embeddings = model.encode_query(queries)
document_embeddings = model.encode_document(documents)
print(query_embeddings.shape, document_embeddings.shape)
# [1, 30522] [3, 30522]
# Get the similarity scores for the embeddings
similarities = model.similarity(query_embeddings, document_embeddings)
print(similarities)
# tensor([[17.0167, 11.4943, 13.8083]])
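To inspect which vocabulary tokens a sparse embedding activates, recent sentence-transformers releases expose a decode helper on SparseEncoder; a hedged example (the output format may vary by version, and the weights shown are illustrative):

# Map the sparse query vector back to its top-weighted (token, weight) pairs
decoded = model.decode(query_embeddings[0], top_k=10)
print(decoded)
# e.g. [('depreciation', 3.1), ('code', 2.2), ...]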
Evaluation
Metrics
Sparse Information Retrieval
- Evaluated with SparseInformationRetrievalEvaluator
Metric | Value |
---|---|
dot_accuracy@1 | 0.457 |
dot_accuracy@3 | 0.7572 |
dot_accuracy@5 | 0.8574 |
dot_accuracy@10 | 0.929 |
dot_precision@1 | 0.457 |
dot_precision@3 | 0.2591 |
dot_precision@5 | 0.178 |
dot_precision@10 | 0.0971 |
dot_recall@1 | 0.4415 |
dot_recall@3 | 0.7428 |
dot_recall@5 | 0.8472 |
dot_recall@10 | 0.9223 |
dot_ndcg@10 | 0.6932 |
dot_mrr@10 | 0.6235 |
dot_map@100 | 0.6191 |
query_active_dims | 21.216 |
query_sparsity_ratio | 0.9993 |
corpus_active_dims | 159.5419 |
corpus_sparsity_ratio | 0.9948 |
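The sparsity ratios follow directly from the active-dimension counts: 1 - 21.216/30522 ≈ 0.9993 for queries and 1 - 159.5419/30522 ≈ 0.9948 for the corpus. A hedged sketch of running the evaluator named above, with toy stand-ins for the real query/corpus split:

from sentence_transformers.sparse_encoder.evaluation import SparseInformationRetrievalEvaluator

# Toy data; the reported numbers come from a full held-out retrieval split.
queries = {"q1": "what code section is depreciation"}
corpus = {"d1": "Section 179 of the United States Internal Revenue Code ..."}
relevant_docs = {"q1": {"d1"}}

evaluator = SparseInformationRetrievalEvaluator(queries=queries, corpus=corpus, relevant_docs=relevant_docs)
results = evaluator(model)  # dict of dot_accuracy@k, dot_ndcg@10, sparsity statistics, ...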
Training Details
Training Dataset
Unnamed Dataset
- Size: 800,000 training samples
- Columns: query, positive, negative_1, and negative_2
- Approximate statistics based on the first 1000 samples:
| | query | positive | negative_1 | negative_2 |
|---|---|---|---|---|
| type | string | string | string | string |
| details | min: 4 tokens, mean: 9.03 tokens, max: 30 tokens | min: 15 tokens, mean: 81.92 tokens, max: 220 tokens | min: 22 tokens, mean: 78.63 tokens, max: 227 tokens | min: 18 tokens, mean: 78.11 tokens, max: 236 tokens |
- Samples:
| query | positive | negative_1 | negative_2 |
|---|---|---|---|
| definition of vas deferens | Vas deferens: The tube that connects the testes with the urethra. The vas deferens is a coiled duct that conveys sperm from the epididymis to the ejaculatory duct and the urethra. | For further discussion of the vas deferens within the context of the structures and functions of reproduction and sexuality, please see the overview section “The Reproductive System.”. See also FERTILITY; TESTICLES; VASECTOMY. | 1 Testicular cancer symptoms include a painless lump or swelling in a testicle, testicle or scrotum pain, a dull ache in the abdomen, back, or groin, and. 2 Urinary Tract Infections (UTIs) A urinary tract infection (UTI) is an infection of the bladder, kidneys, ureters, or urethra. |
| how old is kieron williamson | Kieron Williamson – the latest artist to be part of GoGoDragons! April 21, 2015. A 12-year-old artist, nicknamed Mini-Monet, is to unveil a sculpture of a dragon he has painted for GoGoDragons. Kieron Williamson, from Norfolk, who has so far earned about £2m, painted the 5ft-tall (1.5m) dragon for the event in Norwich. | 8-year-old artist: Don't call me Monet. London, England (CNN) -- He has the deft brush strokes of a seasoned artist, but Kieron Williamson is just eight years old. The boy from Norfolk, in eastern England, has been hailed by the British press as a mini Monet, a reference to the famous French impressionist. | Needless to say, this site does not tell you much about his football career (yet!), but the website will tell you everything there is to know about Kieron Williamson’s passion for oil, watercolour and pastel, |
| when do you start showing third pregnancy | Yes No Thank you! I am pregnant with my third child and I am definitly showing at 10 weeks. I am starting to wear some maternity clothes. My low low rise pre-pregnancy jeans still work. My biggest problem is shirts, but fortunately the style right now is loose shirts that look maternity. Some women do not start to show until they are well into their second trimester or even the start of their third trimester. If you are overweight at the start of your pregnancy, you may not gain as much weight during your pregnancy and may not begin to show until later into your pregnancy. Average: 3.591215. | | |
- Loss: SpladeLoss with these parameters:

  {
      "loss": "SparseMultipleNegativesRankingLoss(scale=1.0, similarity_fct='dot_score')",
      "document_regularizer_weight": 0.003,
      "query_regularizer_weight": 0.005
  }
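A hedged sketch of instantiating a loss with these parameters via the sentence-transformers v5 sparse losses, assuming model is the SparseEncoder from the usage section:

from sentence_transformers.sparse_encoder.losses import SpladeLoss, SparseMultipleNegativesRankingLoss

loss = SpladeLoss(
    model=model,
    loss=SparseMultipleNegativesRankingLoss(model, scale=1.0),
    document_regularizer_weight=0.003,  # FLOPS regularization on document vectors
    query_regularizer_weight=0.005,     # FLOPS regularization on query vectors
)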
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: epoch
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- gradient_accumulation_steps: 4
- learning_rate: 6e-05
- num_train_epochs: 6
- lr_scheduler_type: cosine
- warmup_ratio: 0.025
- fp16: True
- load_best_model_at_end: True
- optim: adamw_torch_fused
- push_to_hub: True
- batch_sampler: no_duplicates
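A hedged end-to-end sketch of wiring these non-default hyperparameters, a quadruplet dataset, and the SpladeLoss from the previous section into the v5 training API (the output path and the single toy row are hypothetical; the real dataset has 800,000 rows):

from datasets import Dataset
from sentence_transformers.sparse_encoder import SparseEncoderTrainer, SparseEncoderTrainingArguments
from sentence_transformers.training_args import BatchSamplers

# Toy stand-in for the (query, positive, negative_1, negative_2) training data
train_dataset = Dataset.from_dict({
    "query": ["definition of vas deferens"],
    "positive": ["Vas deferens: The tube that connects the testes with the urethra. ..."],
    "negative_1": ["For further discussion of the vas deferens ..."],
    "negative_2": ["1 Testicular cancer symptoms include a painless lump ..."],
})

args = SparseEncoderTrainingArguments(
    output_dir="splade-bert-tiny",  # hypothetical
    eval_strategy="epoch",          # the real run also supplied an eval split/evaluator
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=4,
    learning_rate=6e-5,
    num_train_epochs=6,
    lr_scheduler_type="cosine",
    warmup_ratio=0.025,
    fp16=True,
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    push_to_hub=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)

trainer = SparseEncoderTrainer(model=model, args=args, train_dataset=train_dataset, loss=loss)
trainer.train()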
All Hyperparameters
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: epoch
- prediction_loss_only: True
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 4
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 6e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1.0
- num_train_epochs: 6
- max_steps: -1
- lr_scheduler_type: cosine
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.025
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: True
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: True
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch_fused
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: True
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: None
- hub_always_push: False
- hub_revision: None
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- include_for_metrics: []
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- use_liger_kernel: False
- liger_kernel_config: None
- eval_use_gather_object: False
- average_tokens_across_devices: False
- prompts: None
- batch_sampler: no_duplicates
- multi_dataset_batch_sampler: proportional
- router_mapping: {}
- learning_rate_mapping: {}
Training Logs
Epoch | Step | Training Loss | dot_ndcg@10 |
---|---|---|---|
1.0 | 12500 | 11.5771 | 0.6587 |
2.0 | 25000 | 0.7888 | 0.6810 |
3.0 | 37500 | 0.7271 | 0.6884 |
4.0 | 50000 | 0.6774 | 0.6920 |
5.0 | 62500 | 0.6436 | 0.6912 |
**6.0** | **75000** | **0.6274** | **0.6932** |
- The bold row denotes the saved checkpoint.
Framework Versions
- Python: 3.11.11
- Sentence Transformers: 5.0.0
- Transformers: 4.53.1
- PyTorch: 2.6.0+cu124
- Accelerate: 1.5.2
- Datasets: 3.6.0
- Tokenizers: 0.21.1
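To match this environment, the versions above can be pinned explicitly (a convenience command, not part of the original card):

pip install sentence-transformers==5.0.0 transformers==4.53.1 torch==2.6.0 accelerate==1.5.2 datasets==3.6.0 tokenizers==0.21.1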
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
SpladeLoss
@misc{formal2022distillationhardnegativesampling,
title={From Distillation to Hard Negative Sampling: Making Sparse Neural IR Models More Effective},
author={Thibault Formal and Carlos Lassance and Benjamin Piwowarski and Stéphane Clinchant},
year={2022},
eprint={2205.04733},
archivePrefix={arXiv},
primaryClass={cs.IR},
url={https://arxiv.org/abs/2205.04733},
}
SparseMultipleNegativesRankingLoss
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
FlopsLoss
@article{paria2020minimizing,
title={Minimizing flops to learn efficient sparse representations},
author={Paria, Biswajit and Yeh, Chih-Kuan and Yen, Ian EH and Xu, Ning and Ravikumar, Pradeep and P{\'o}czos, Barnab{\'a}s},
journal={arXiv preprint arXiv:2004.05665},
year={2020}
}