metadata
language:
- en
license: mit
tags:
- sentence-transformers
- sparse-encoder
- sparse
- splade
- generated_from_trainer
- dataset_size:496123
- loss:SpladeLoss
- loss:SparseMultipleNegativesRankingLoss
- loss:FlopsLoss
base_model: prajjwal1/bert-medium
widget:
- text: >-
What is the name, background and ethnicity of the actress who plays Raj’s
sister Priya on “The Big Bang Theory”? —Charles Dix, Stewartsville, Mo.
Aarti Mann, 36, a first-generation Indian American, was born in
Connecticut and raised in Pennsylvania, and plays Priya Koothrappali on
“The Big Bang Theory.” Of landing the role as Raj’s sister, she says, “It
is like winning the opportunity to go to the acting Olympics.”
- text: >-
Resolved Question: Severe pain in right side of hip radiating down leg and
into foot. It hurts to stand, walk, sit or lie down. I've had it for
several weeks & have used heat, ice, muscle rub-ons & patches.
- text: >-
The Antarctic Treaty. The 12 nations listed in the preamble (below) signed
the Antarctic Treaty on 1 December 1959 at Washington, D.C. The Treaty
entered into force on 23 June 1961; the 12 signatories became the original
12 consultative nations. Another 21 nations have acceded to the Antarctic
Treaty: Austria, Belarus, Canada, Colombia, Cuba, Democratic People's
Republic of Korea, Denmark, Estonia, Greece, Guatemala, Hungary, Malaysia,
Monaco, Pakistan, Papua New Guinea, Portugal, Romania, Slovak Republic,
Switzerland, Turkey, and Venezuela.
- text: >-
Orlando, Florida, USA — Sunrise, Sunset, and Daylength, May 2017. May 2017
— Sun in Orlando.
- text: >-
Line baking dish ... to also cover roast). Place roast ... the roast.
Place in preheated 300 degree oven for 2 1/2 to 3 hours. About 50 minutes
per pound. Trim all excess fat from roast. Place potatoes ... Crockery Pot
on top of potatoes and onions. Cover and cook on low setting for 10 to 12
hours (high 5 to 6).
pipeline_tag: feature-extraction
library_name: sentence-transformers
metrics:
- dot_accuracy@1
- dot_accuracy@3
- dot_accuracy@5
- dot_accuracy@10
- dot_precision@1
- dot_precision@3
- dot_precision@5
- dot_precision@10
- dot_recall@1
- dot_recall@3
- dot_recall@5
- dot_recall@10
- dot_ndcg@10
- dot_mrr@10
- dot_map@100
- query_active_dims
- query_sparsity_ratio
- corpus_active_dims
- corpus_sparsity_ratio
model-index:
- name: SPLADE-BERT-Medium
results:
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: Unknown
type: unknown
metrics:
- type: dot_accuracy@1
value: 0.4716
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.7802
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.8684
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.9396
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.4716
name: Dot Precision@1
- type: dot_precision@3
value: 0.26713333333333333
name: Dot Precision@3
- type: dot_precision@5
value: 0.18059999999999998
name: Dot Precision@5
- type: dot_precision@10
value: 0.09851999999999998
name: Dot Precision@10
- type: dot_recall@1
value: 0.4563333333333333
name: Dot Recall@1
- type: dot_recall@3
value: 0.7666333333333334
name: Dot Recall@3
- type: dot_recall@5
value: 0.8592166666666667
name: Dot Recall@5
- type: dot_recall@10
value: 0.9338666666666667
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.7088774640922301
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.6397524603174632
name: Dot Mrr@10
- type: dot_map@100
value: 0.6359976077086615
name: Dot Map@100
- type: query_active_dims
value: 23.28499984741211
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9992371076650478
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 175.6306999586799
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9942457669891004
name: Corpus Sparsity Ratio
SPLADE-BERT-Medium
This is a SPLADE Sparse Encoder model finetuned from prajjwal1/bert-medium using the sentence-transformers library. It maps sentences & paragraphs to a 30522-dimensional sparse vector space and can be used for semantic search and sparse retrieval.
Model Details
Model Description
- Model Type: SPLADE Sparse Encoder
- Base model: prajjwal1/bert-medium
- Maximum Sequence Length: 512 tokens
- Output Dimensionality: 30522 dimensions
- Similarity Function: Dot Product
- Language: en
- License: mit
Model Sources
- Documentation: Sentence Transformers Documentation
- Documentation: Sparse Encoder Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Sparse Encoders on Hugging Face
Full Model Architecture
SparseEncoder(
(0): MLMTransformer({'max_seq_length': 512, 'do_lower_case': False, 'architecture': 'BertForMaskedLM'})
(1): SpladePooling({'pooling_strategy': 'max', 'activation_function': 'relu', 'word_embedding_dimension': 30522})
)
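The SpladePooling module above collapses the MLM head's per-token logits into a single vocabulary-sized sparse vector. As a rough NumPy sketch of the 'max' strategy with 'relu' activation (the log(1 + relu(x)) formula max-pooled over tokens follows the SPLADE papers; the library's exact implementation may differ):

```python
import numpy as np

def splade_pool(token_logits: np.ndarray) -> np.ndarray:
    """Collapse per-token MLM logits of shape (seq_len, vocab_size)
    into one sparse vector (vocab_size,) via log-saturated max pooling."""
    activated = np.log1p(np.maximum(token_logits, 0.0))  # log(1 + relu(x))
    return activated.max(axis=0)                          # max over tokens

# Toy example: 2 tokens over a 5-word vocabulary.
logits = np.array([[2.0, -1.0, 0.0, 3.0, -0.5],
                   [0.5,  4.0, 0.0, 1.0, -2.0]])
vec = splade_pool(logits)
# Dimensions whose logits are never positive stay exactly 0,
# which is what makes the output vectors sparse.
```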
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SparseEncoder
# Download from the 🤗 Hub
model = SparseEncoder("yosefw/SPLADE-BERT-Medium-BS384")
# Run inference
queries = [
"how long to bake arm roast",
]
documents = [
'Line baking dish ... to also cover roast). Place roast ... the roast. Place in preheated 300 degree oven for 2 1/2 to 3 hours. About 50 minutes per pound.rim all excess fat from roast. Place potatoes ... Crockery Pot on top of potatoes and onions. Cover and cook on low setting for 10 to 12 hours (high 5 to 6).',
'Considerations. The total time it takes to cook an arm roast depends on its size. A 3- to 4-lb. chuck roast takes 5 to 6 hours on high and 10 to 12 hours on low.Chuck roasts usually contain enough marbled fat to cook without water, but most Crock-Pot roast recipes call for a little liquid.Most importantly, resist the temptation to lift the lid while your roast is cooking. 3- to 4-lb. chuck roast takes 5 to 6 hours on high and 10 to 12 hours on low. Chuck roasts usually contain enough marbled fat to cook without water, but most Crock-Pot roast recipes call for a little liquid. Most importantly, resist the temptation to lift the lid while your roast is cooking.',
'Set your Crock Pot on high to reach a simmer point of 209 degrees F in 3 to 4 hours, or low to reach the same cooking temperature in 7 to 8 hours. The total time it takes to cook an arm roast depends on its size. A 3- to 4-lb. chuck roast takes 5 to 6 hours on high and 10 to 12 hours on low.Chuck roasts usually contain enough marbled fat to cook without water, but most Crock-Pot roast recipes call for a little liquid.Most importantly, resist the temptation to lift the lid while your roast is cooking. 3- to 4-lb. chuck roast takes 5 to 6 hours on high and 10 to 12 hours on low. Chuck roasts usually contain enough marbled fat to cook without water, but most Crock-Pot roast recipes call for a little liquid. Most importantly, resist the temptation to lift the lid while your roast is cooking.',
]
query_embeddings = model.encode_query(queries)
document_embeddings = model.encode_document(documents)
print(query_embeddings.shape, document_embeddings.shape)
# [1, 30522] [3, 30522]
# Get the similarity scores for the embeddings
similarities = model.similarity(query_embeddings, document_embeddings)
print(similarities)
# tensor([[16.1861, 15.3382, 15.6794]])
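Because the similarity function is a plain dot product over sparse vectors, a score depends only on the dimensions active in both query and document. A minimal illustration with hypothetical token-id dimensions and weights (not taken from this model's actual output):

```python
def sparse_dot(a: dict, b: dict) -> float:
    """Dot product of two sparse vectors stored as {dim: weight} dicts.
    Only dimensions active in both vectors contribute to the score."""
    if len(b) < len(a):
        a, b = b, a  # iterate over the smaller vector
    return sum(w * b[d] for d, w in a.items() if d in b)

q = {101: 1.2, 2054: 0.8, 3124: 0.5}          # query: few active dims (~23 on average here)
d = {101: 0.9, 2054: 1.1, 999: 0.3, 42: 0.7}  # document: more active dims (~176 on average)
score = sparse_dot(q, d)  # 1.2*0.9 + 0.8*1.1 = 1.96; dim 3124 has no match
```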
Evaluation
Metrics
Sparse Information Retrieval
- Evaluated with SparseInformationRetrievalEvaluator
Metric | Value |
---|---|
dot_accuracy@1 | 0.4716 |
dot_accuracy@3 | 0.7802 |
dot_accuracy@5 | 0.8684 |
dot_accuracy@10 | 0.9396 |
dot_precision@1 | 0.4716 |
dot_precision@3 | 0.2671 |
dot_precision@5 | 0.1806 |
dot_precision@10 | 0.0985 |
dot_recall@1 | 0.4563 |
dot_recall@3 | 0.7666 |
dot_recall@5 | 0.8592 |
dot_recall@10 | 0.9339 |
dot_ndcg@10 | 0.7089 |
dot_mrr@10 | 0.6398 |
dot_map@100 | 0.636 |
query_active_dims | 23.285 |
query_sparsity_ratio | 0.9992 |
corpus_active_dims | 175.6307 |
corpus_sparsity_ratio | 0.9942 |
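The two sparsity metrics follow directly from the active-dimension counts: with a 30522-dimensional output space, the sparsity ratio is one minus the fraction of active dimensions. A small sketch that reproduces the reported values:

```python
VOCAB_SIZE = 30522  # output dimensionality of this model

def sparsity_ratio(active_dims: float, vocab_size: int = VOCAB_SIZE) -> float:
    """Fraction of zero dimensions, given the mean number of active dims."""
    return 1.0 - active_dims / vocab_size

# Reproduces the figures in the table above:
# sparsity_ratio(23.285)   -> ~0.9992 (query)
# sparsity_ratio(175.6307) -> ~0.9942 (corpus)
```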
Training Details
Training Dataset
Unnamed Dataset
- Size: 496,123 training samples
- Columns: query, positive, negative_1, and negative_2
- Approximate statistics based on the first 1000 samples:

 | query | positive | negative_1 | negative_2 |
---|---|---|---|---|
type | string | string | string | string |
details | min: 4 tokens, mean: 8.87 tokens, max: 43 tokens | min: 24 tokens, mean: 81.23 tokens, max: 259 tokens | min: 20 tokens, mean: 79.21 tokens, max: 197 tokens | min: 20 tokens, mean: 77.89 tokens, max: 207 tokens |
- Samples:

query | positive | negative_1 | negative_2 |
---|---|---|---|
heart specialists in ridgeland ms | Dr. George Reynolds Jr, MD is a cardiology specialist in Ridgeland, MS and has been practicing for 35 years. He graduated from Vanderbilt University School Of Medicine in 1977 and specializes in cardiology and internal medicine. | Dr. James Kramer is a Internist in Ridgeland, MS. Find Dr. Kramer's phone number, address and more. | Dr. James Kramer is an internist in Ridgeland, Mississippi. He received his medical degree from Loma Linda University School of Medicine and has been in practice for more than 20 years. Dr. James Kramer's Details |
does baytril otic require a prescription | Baytril Otic Ear Drops-Enrofloxacin/Silver Sulfadiazine-Prices & Information. A prescription is required for this item. A prescription is required for this item. Brand medication is not available at this time. | RX required for this item. Click here for our full Prescription Policy and Form. Baytril Otic (enrofloxacin/silver sulfadiazine) Emulsion from Bayer is the first fluoroquinolone approved by the Food and Drug Administration for the topical treatment of canine otitis externa. | Product Details. Baytril Otic is a highly effective treatment prescribed by many veterinarians when your pet has an ear infection caused by susceptible bacteria or fungus. Baytril Otic is: a liquid emulsion that is used topically directly in the ear or on the skin in order to treat susceptible bacterial and yeast infections. |
what is on a gyro | Report Abuse. Gyros or gyro (giros) (pronounced /ˈjɪəroʊ/ or /ˈdʒaɪroʊ/, Greek: γύρος turn) is a Greek dish consisting of meat (typically lamb and/or beef), tomato, onion, and tzatziki sauce, and is served with pita bread. Chicken and pork meat can be used too. | A gyroscope (from Ancient Greek γῦρος gûros, circle and σκοπέω skopéō, to look) is a spinning wheel or disc in which the axis of rotation is free to assume any orientation by itself. When rotating, the orientation of this axis is unaffected by tilting or rotation of the mounting, according to the conservation of angular momentum. | Diagram of a gyro wheel. Reaction arrows about the output axis (blue) correspond to forces applied about the input axis (green), and vice versa. A gyroscope is a wheel mounted in two or three gimbals, which are pivoted supports that allow the rotation of the wheel about a single axis. |
- Loss: SpladeLoss with these parameters:
  {
      "loss": "SparseMultipleNegativesRankingLoss(scale=1.0, similarity_fct='dot_score', gather_across_devices=False)",
      "document_regularizer_weight": 0.003,
      "query_regularizer_weight": 0.005
  }
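The regularizer weights above scale FLOPS-style penalty terms added to the ranking loss. Per Paria et al. (cited below), the FLOPS loss is the sum over vocabulary dimensions of the squared mean absolute activation across a batch; a toy NumPy sketch of that term (not the library's exact code):

```python
import numpy as np

def flops_loss(embeddings: np.ndarray) -> float:
    """FLOPS regularizer: sum_j (mean_i |w_ij|)^2 over a batch of sparse
    vectors, pushing each dimension's average activation toward zero."""
    return float((np.abs(embeddings).mean(axis=0) ** 2).sum())

batch = np.array([[0.0, 2.0, 0.0],
                  [0.0, 0.0, 4.0]])
# per-dimension means: [0, 1, 2] -> loss = 0^2 + 1^2 + 2^2 = 5
# total objective (sketch):
#   ranking_loss + 0.005 * flops(query_embs) + 0.003 * flops(doc_embs)
```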
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: epoch
- per_device_train_batch_size: 48
- per_device_eval_batch_size: 48
- gradient_accumulation_steps: 8
- learning_rate: 8e-05
- num_train_epochs: 8
- lr_scheduler_type: cosine
- warmup_ratio: 0.025
- fp16: True
- load_best_model_at_end: True
- push_to_hub: True
- batch_sampler: no_duplicates
All Hyperparameters
Click to expand
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: epoch
- prediction_loss_only: True
- per_device_train_batch_size: 48
- per_device_eval_batch_size: 48
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 8
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 8e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1.0
- num_train_epochs: 8
- max_steps: -1
- lr_scheduler_type: cosine
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.025
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: True
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: True
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch_fused
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: True
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: None
- hub_always_push: False
- hub_revision: None
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- include_for_metrics: []
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- use_liger_kernel: False
- liger_kernel_config: None
- eval_use_gather_object: False
- average_tokens_across_devices: False
- prompts: None
- batch_sampler: no_duplicates
- multi_dataset_batch_sampler: proportional
- router_mapping: {}
- learning_rate_mapping: {}
Training Logs
Epoch | Step | Training Loss | dot_ndcg@10 |
---|---|---|---|
1.0 | 1292 | 42.0325 | 0.7155 |
2.0 | 2584 | 1.1261 | 0.7216 |
3.0 | 3876 | 1.049 | 0.7214 |
4.0 | 5168 | 0.9631 | 0.7188 |
5.0 | 6460 | 0.8725 | 0.7120 |
-1 | -1 | - | 0.7089 |
Framework Versions
- Python: 3.12.11
- Sentence Transformers: 5.1.0
- Transformers: 4.55.4
- PyTorch: 2.8.0+cu126
- Accelerate: 1.10.1
- Datasets: 4.0.0
- Tokenizers: 0.21.4
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
SpladeLoss
@misc{formal2022distillationhardnegativesampling,
title={From Distillation to Hard Negative Sampling: Making Sparse Neural IR Models More Effective},
author={Thibault Formal and Carlos Lassance and Benjamin Piwowarski and Stéphane Clinchant},
year={2022},
eprint={2205.04733},
archivePrefix={arXiv},
primaryClass={cs.IR},
url={https://arxiv.org/abs/2205.04733},
}
SparseMultipleNegativesRankingLoss
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
FlopsLoss
@article{paria2020minimizing,
title={Minimizing flops to learn efficient sparse representations},
author={Paria, Biswajit and Yeh, Chih-Kuan and Yen, Ian EH and Xu, Ning and Ravikumar, Pradeep and P{\'o}czos, Barnab{\'a}s},
journal={arXiv preprint arXiv:2004.05665},
year={2020}
}