CrossEncoder based on microsoft/MiniLM-L12-H384-uncased
This is a Cross Encoder model finetuned from microsoft/MiniLM-L12-H384-uncased on the ms_marco dataset using the sentence-transformers library. It computes scores for pairs of texts, which can be used for text reranking and semantic search.
Model Details
Model Description
- Model Type: Cross Encoder
- Base model: microsoft/MiniLM-L12-H384-uncased
- Maximum Sequence Length: 512 tokens
- Number of Output Labels: 1 label
- Training Dataset: ms_marco
- Language: en
Model Sources
- Documentation: Sentence Transformers Documentation
- Documentation: Cross Encoder Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Cross Encoders on Hugging Face
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import CrossEncoder
# Download from the 🤗 Hub
model = CrossEncoder("tomaarsen/reranker-msmarco-v1.1-MiniLM-L12-H384-uncased-listnet-seeded")
# Get scores for pairs of texts
pairs = [
    ['How many calories in an egg', 'There are on average between 55 and 80 calories in an egg depending on its size.'],
    ['How many calories in an egg', 'Egg whites are very low in calories, have no fat, no cholesterol, and are loaded with protein.'],
    ['How many calories in an egg', 'Most of the calories in an egg come from the yellow yolk in the center.'],
]
scores = model.predict(pairs)
print(scores.shape)
# (3,)

# Or rank different texts based on similarity to a single text
ranks = model.rank(
    'How many calories in an egg',
    [
        'There are on average between 55 and 80 calories in an egg depending on its size.',
        'Egg whites are very low in calories, have no fat, no cholesterol, and are loaded with protein.',
        'Most of the calories in an egg come from the yellow yolk in the center.',
    ]
)
# [{'corpus_id': ..., 'score': ...}, {'corpus_id': ..., 'score': ...}, ...]
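The model also slots into a two-stage retrieve-then-rerank pipeline for semantic search. The sketch below is illustrative rather than part of the original card: the bi-encoder (sentence-transformers/all-MiniLM-L6-v2) and the toy corpus are assumptions.

from sentence_transformers import CrossEncoder, SentenceTransformer, util

# Hypothetical retrieve-then-rerank setup: the bi-encoder and the toy corpus are
# illustrative assumptions, not part of this model's training or evaluation.
retriever = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
reranker = CrossEncoder("tomaarsen/reranker-msmarco-v1.1-MiniLM-L12-H384-uncased-listnet-seeded")

corpus = [
    "There are on average between 55 and 80 calories in an egg depending on its size.",
    "Egg whites are very low in calories, have no fat, no cholesterol, and are loaded with protein.",
    "Most of the calories in an egg come from the yellow yolk in the center.",
]
query = "How many calories in an egg"

# Stage 1: fast bi-encoder retrieval over the full corpus
corpus_embeddings = retriever.encode(corpus, convert_to_tensor=True)
query_embedding = retriever.encode(query, convert_to_tensor=True)
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=3)[0]

# Stage 2: rerank the retrieved candidates with the cross-encoder
candidates = [corpus[hit["corpus_id"]] for hit in hits]
for entry in reranker.rank(query, candidates):
    print(f"{entry['score']:.3f}\t{candidates[entry['corpus_id']]}")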
Evaluation
Metrics
Cross Encoder Reranking
- Datasets: NanoMSMARCO_R100, NanoNFCorpus_R100, and NanoNQ_R100
- Evaluated with CrossEncoderRerankingEvaluator with these parameters:
  { "at_k": 10, "always_rerank_positives": true }
Metric | NanoMSMARCO_R100 | NanoNFCorpus_R100 | NanoNQ_R100 |
---|---|---|---|
map | 0.5060 (+0.0164) | 0.3383 (+0.0773) | 0.5939 (+0.1743) |
mrr@10 | 0.4900 (+0.0125) | 0.5705 (+0.0707) | 0.6004 (+0.1737) |
ndcg@10 | 0.5497 (+0.0093) | 0.3736 (+0.0485) | 0.6574 (+0.1568) |
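For reference, a minimal sketch of running CrossEncoderRerankingEvaluator with the parameters above. The toy samples and their format (one dict per query with positive and negative document lists) are assumptions for illustration; the reported numbers come from the NanoBEIR reranking datasets.

from sentence_transformers import CrossEncoder
from sentence_transformers.cross_encoder.evaluation import CrossEncoderRerankingEvaluator

model = CrossEncoder("tomaarsen/reranker-msmarco-v1.1-MiniLM-L12-H384-uncased-listnet-seeded")

# Toy samples for illustration only; each entry pairs a query with documents
# known to be relevant ("positive") and irrelevant ("negative").
samples = [
    {
        "query": "How many calories in an egg",
        "positive": ["There are on average between 55 and 80 calories in an egg depending on its size."],
        "negative": ["A chartered engineer is registered with the Engineering Council."],
    },
]

evaluator = CrossEncoderRerankingEvaluator(
    samples=samples,
    at_k=10,
    always_rerank_positives=True,
)
print(evaluator(model))  # dict of metrics such as map, mrr@10, and ndcg@10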
Cross Encoder Nano BEIR
- Dataset: NanoBEIR_R100_mean
- Evaluated with CrossEncoderNanoBEIREvaluator with these parameters:
  { "dataset_names": [ "msmarco", "nfcorpus", "nq" ], "rerank_k": 100, "at_k": 10, "always_rerank_positives": true }
Metric | Value |
---|---|
map | 0.4794 (+0.0894) |
mrr@10 | 0.5536 (+0.0856) |
ndcg@10 | 0.5269 (+0.0715) |
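The NanoBEIR mean can, in principle, be reproduced with the same evaluator configuration; a minimal sketch (the evaluator fetches the Nano datasets from the Hub itself):

from sentence_transformers import CrossEncoder
from sentence_transformers.cross_encoder.evaluation import CrossEncoderNanoBEIREvaluator

model = CrossEncoder("tomaarsen/reranker-msmarco-v1.1-MiniLM-L12-H384-uncased-listnet-seeded")

# Same parameters as reported above
evaluator = CrossEncoderNanoBEIREvaluator(
    dataset_names=["msmarco", "nfcorpus", "nq"],
    rerank_k=100,
    at_k=10,
    always_rerank_positives=True,
)
results = evaluator(model)
print(results)  # per-dataset metrics plus the aggregated NanoBEIR mean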
Training Details
Training Dataset
ms_marco
- Dataset: ms_marco at a47ee7a
- Size: 78,704 training samples
- Columns: query, docs, and labels
- Approximate statistics based on the first 1000 samples:
  |         | query | docs | labels |
  |---------|-------|------|--------|
  | type    | string | list | list |
  | details | min: 11 characters, mean: 34.13 characters, max: 88 characters | min: 2 elements, mean: 6.00 elements, max: 10 elements | min: 2 elements, mean: 6.00 elements, max: 10 elements |
- Samples:
  - query: food that are vegetarian that have vitamin a
    docs:
['Vitamin A is a fat soluble vitamin, and therefore, needs to be consumed with fat in order to have optimal absorption. High vitamin A foods include sweet potatoes, carrots, dark leafy greens, winter squashes, lettuce, dried apricots, cantaloupe, bell peppers, fish, liver, and tropical fruits. The current daily value for Vitamin A is 5000 international units (IU).', 'Unlike some other B vitamins, B12 is not found in any plant food other than fortified cereals. It is, however, abundant in many meats and fish, and in smaller amounts in milk and eggs. This makes it difficult for people following a strict vegetarian diet to get the necessary amount of vitamin B12.', 'They found that 92% of the vegans they studied -- those who ate the strictest vegetarian diet, which shuns all animal products, including milk and eggs -- had vitamin B12 deficiency. But two in three people who followed a vegetarian diet that included milk and eggs as their only animal foods also were deficient.', 'Vitamin B 1...
    labels: [1, 0, 0, 0, 0, ...]
  - query: what is trilobar prostatic enlargement
    docs:
["Prostate enlargement: Most prostatic enlargement is due to benign prostatic hyperplasia (BPH), a problem that bothers men increasingly with advancing age. The process of BPH generally begins in a man's 30s, evolves very slowly and usually causes symptoms only after he has passed the half-century mark. It is not a precursor (a forerunner) to prostate cancer. Treatment of BPH is usually reserved for men with significant symptoms. Watchful waiting with medical monitoring once a year is appropriate for most men with BPH. The medical therapy of BPH includes medication.", '1 A benign (noncancerous) condition in which an overgrowth of prostate tissue pushes against the urethra and the bladder, blocking the flow of urine. 2 Increase in constituent cells in the prostate, leading to enlargement of the organ (hypertrophy) and adverse impact on the lower urinary tract function. 1 Increase in constituent cells in the prostate, leading to enlargement of the organ (hypertrophy) and adverse impact ...
    labels: [1, 0, 0, 0, 0, ...]
  - query: what is the classification of seasoning
    docs:
['Artificial kiln seasoning. Its the most traditional way of seasoning wood or timber. In this method wood is dryed usually by the keeping the wood exposed to air, so that the moisture evaporates and wood is seasoned. This method is very economical is a sense that no operational charges exists but the process is too slow & ..... [Read More].', 'Vegetables used in seasoning such as onions, garlic, and celery may also be included in this category in some circumstances. Some people break types of spices up by what one does when it is added to food. Sweet, hot, pungent, and tangy are the four primary categories. I think the most common herbs and spices are garlic, onions, and Italian spices like Oregano. Garlic and onion can also be found in salts and powders for simpler things like marinades and just for basic baking.', 'Spices and herbs at a grocery shop in Goa, India. A spice is a seed, fruit, root, bark, berry, bud or vegetable substance primarily used for flavoring, coloring or preser...
    labels: [1, 1, 1, 0, 0, ...]
- Loss: ListNetLoss with these parameters:
  { "activation_fct": "torch.nn.modules.linear.Identity", "mini_batch_size": 16 }
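A minimal sketch of how this loss configuration maps onto code in recent Sentence Transformers releases; the base model and the placeholder sample below are illustrative, and the activation_fct/mini_batch_size values mirror the parameters listed above.

import torch
from sentence_transformers import CrossEncoder
from sentence_transformers.cross_encoder.losses import ListNetLoss

# Fresh cross-encoder with a single output logit, as in this card
model = CrossEncoder("microsoft/MiniLM-L12-H384-uncased", num_labels=1)

# ListNetLoss with the reported parameters: identity activation on the logits,
# and mini-batches of 16 (query, document) pairs per listwise sample.
loss = ListNetLoss(model, activation_fct=torch.nn.Identity(), mini_batch_size=16)

# Each training sample follows the query / docs / labels layout described above
# (placeholder document strings, for illustration only):
sample = {
    "query": "what is the classification of seasoning",
    "docs": ["first candidate passage", "second candidate passage", "third candidate passage"],
    "labels": [1, 0, 0],  # 1 = relevant, 0 = not relevant
}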
Evaluation Dataset
ms_marco
- Dataset: ms_marco at a47ee7a
- Size: 1,000 evaluation samples
- Columns: query, docs, and labels
- Approximate statistics based on the first 1000 samples:
  |         | query | docs | labels |
  |---------|-------|------|--------|
  | type    | string | list | list |
  | details | min: 11 characters, mean: 33.79 characters, max: 95 characters | min: 2 elements, mean: 6.00 elements, max: 10 elements | min: 2 elements, mean: 6.00 elements, max: 10 elements |
- Samples:
  - query: absolute viscosity definition
    docs:
['Noun. 1. absolute viscosity-a measure of the resistance to flow of a fluid under an applied force. coefficient of viscosity, dynamic viscosity. coefficient-a constant number that serves as a measure of some property or characteristic.', '1 the state or property of being viscous. 2 (Physics). a the extent to which a fluid resists a tendency to flow. b (Also called) absolute viscosity a measure of this resistance, equal to the tangential stress on a liquid undergoing streamline flow divided by its velocity gradient. It is measured in newton seconds per metre squared. , (Symbol) η.', 'Kinematic Viscosity. Kinematic viscosity is the ratio of-absolute (or dynamic) viscosity to density-a quantity in which no force is involved. Kinematic viscosity can be obtained by dividing the absolute viscosity of a fluid with the fluid mass density.', '2] = shear stress acted by fluid on lower surface of the blank element du = velocity of the blank element relative to blank holder and die surface [mu] =...
    labels: [1, 0, 0, 0, 0, ...]
  - query: meaning of chartered engineer
    docs:
['noun. ( 1 in Britain) an engineer who is registered with the Engineering Council as having the scientific and technical knowledge and practical experience to satisfy its professional requirements.', '1 Trends. ( 2 in Britain) an engineer who is registered with the Engineering Council as having the scientific and technical knowledge and practical experience to satisfy its professional requirements.', '1 (in Britain) an engineer who is registered with the Engineering Council as having the scientific and technical knowledge and practical experience to satisfy its professional requirements. 2 Abbreviation: CEng.', 'chartered engineer n (in Britain) an engineer who is registered with the Engineering Council as having the scientific and technical knowledge and practical experience to satisfy its professional requirements, (Abbrev.) CEng. chartered engineer.', 'chartered engineer. ( 1 in Britain) an engineer who is registered with the Engineering Council as having the scientific and techni...
    labels: [1, 0, 0, 0, 0, ...]
  - query: how much do personal assistants make
    docs:
['States That Pay the Most. Without exception, the personal assistants wanting to earn the highest dollar amount per year should live on the East Coast. New York tops the list, with an annual salary range of over $66,000 per year or just over $31.00 per hour. PAs in Maryland make the least per year at nearly $59,000. According to the Bureau of Labor Statistics, as of 2013, executive assistants/secretaries earn the highest salaries at nearly $51,870 per year on average. Other highly trained assistants include legal and medical secretaries, who can expect to earn just over $45,000 and $33,000 respectively.', "Before an agreement on pay is reached, research the national and local averages for full time personal assistant pay. According to the US Bureau of Labor Statistics (BLS), an assistant in California made an average hourly rate of $27.01 as of May 2012, while the same position in Florida earned a rate of $20.60. Determine what your budget for an assistant is and compare that number t...
    labels: [1, 0, 0, 0, 0, ...]
- Loss: ListNetLoss with these parameters:
  { "activation_fct": "torch.nn.modules.linear.Identity", "mini_batch_size": 16 }
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: steps
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- learning_rate: 2e-05
- num_train_epochs: 1
- warmup_ratio: 0.1
- seed: 12
- bf16: True
- load_best_model_at_end: True
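As a hedged sketch, the non-default values above map onto CrossEncoderTrainingArguments in recent Sentence Transformers releases roughly as follows; the output directory is a placeholder, and the train/eval datasets, loss, and evaluator from the earlier sections would be passed to a CrossEncoderTrainer alongside these arguments.

from sentence_transformers.cross_encoder import CrossEncoderTrainingArguments

# Mirrors the non-default hyperparameters listed above; output_dir is a placeholder.
args = CrossEncoderTrainingArguments(
    output_dir="reranker-msmarco-MiniLM-L12-listnet",
    eval_strategy="steps",
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    learning_rate=2e-5,
    num_train_epochs=1,
    warmup_ratio=0.1,
    seed=12,
    bf16=True,
    load_best_model_at_end=True,
)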
All Hyperparameters
Click to expand
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: steps
- prediction_loss_only: True
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 2e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1.0
- num_train_epochs: 1
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.1
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 12
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: True
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: True
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: None
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- include_for_metrics: []
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- dispatch_batches: None
- split_batches: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- use_liger_kernel: False
- eval_use_gather_object: False
- average_tokens_across_devices: False
- prompts: None
- batch_sampler: batch_sampler
- multi_dataset_batch_sampler: proportional
Training Logs
Epoch | Step | Training Loss | Validation Loss | NanoMSMARCO_R100_ndcg@10 | NanoNFCorpus_R100_ndcg@10 | NanoNQ_R100_ndcg@10 | NanoBEIR_R100_mean_ndcg@10 |
---|---|---|---|---|---|---|---|
-1 | -1 | - | - | 0.0300 (-0.5104) | 0.2528 (-0.0723) | 0.0168 (-0.4839) | 0.0999 (-0.3555) |
0.0002 | 1 | 2.0665 | - | - | - | - | - |
0.0508 | 250 | 2.0907 | - | - | - | - | - |
0.1016 | 500 | 2.0889 | 2.0762 | 0.4880 (-0.0524) | 0.3157 (-0.0094) | 0.5145 (+0.0139) | 0.4394 (-0.0160) |
0.1525 | 750 | 2.0817 | - | - | - | - | - |
0.2033 | 1000 | 2.0771 | 2.0739 | 0.5346 (-0.0058) | 0.3581 (+0.0331) | 0.5875 (+0.0869) | 0.4934 (+0.0380) |
0.2541 | 1250 | 2.0813 | - | - | - | - | - |
0.3049 | 1500 | 2.073 | 2.0730 | 0.5088 (-0.0316) | 0.3440 (+0.0189) | 0.5719 (+0.0713) | 0.4749 (+0.0195) |
0.3558 | 1750 | 2.0698 | - | - | - | - | - |
0.4066 | 2000 | 2.0752 | 2.0725 | 0.5421 (+0.0017) | 0.3741 (+0.0490) | 0.6318 (+0.1311) | 0.5160 (+0.0606) |
0.4574 | 2250 | 2.073 | - | - | - | - | - |
0.5082 | 2500 | 2.0712 | 2.0725 | 0.5311 (-0.0094) | 0.3506 (+0.0256) | 0.6258 (+0.1252) | 0.5025 (+0.0471) |
0.5591 | 2750 | 2.0682 | - | - | - | - | - |
0.6099 | 3000 | 2.0738 | 2.0727 | 0.5682 (+0.0277) | 0.3634 (+0.0384) | 0.6241 (+0.1235) | 0.5186 (+0.0632) |
0.6607 | 3250 | 2.0702 | - | - | - | - | - |
0.7115 | 3500 | 2.0722 | 2.0721 | 0.5591 (+0.0187) | 0.3563 (+0.0312) | 0.6453 (+0.1446) | 0.5202 (+0.0649) |
0.7624 | 3750 | 2.0714 | - | - | - | - | - |
0.8132 | 4000 | 2.0632 | 2.0724 | 0.5497 (+0.0093) | 0.3736 (+0.0485) | 0.6574 (+0.1568) | 0.5269 (+0.0715) |
0.8640 | 4250 | 2.0681 | - | - | - | - | - |
0.9148 | 4500 | 2.066 | 2.0720 | 0.5510 (+0.0106) | 0.3718 (+0.0468) | 0.6483 (+0.1476) | 0.5237 (+0.0683) |
0.9656 | 4750 | 2.0736 | - | - | - | - | - |
-1 | -1 | - | - | 0.5497 (+0.0093) | 0.3736 (+0.0485) | 0.6574 (+0.1568) | 0.5269 (+0.0715) |
- The row at step 4000 (epoch 0.8132) denotes the saved checkpoint.
Environmental Impact
Carbon emissions were measured using CodeCarbon.
- Energy Consumed: 0.238 kWh
- Carbon Emitted: 0.092 kg of CO2
- Hours Used: 0.977 hours
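The measurements above were collected with CodeCarbon; a minimal sketch of how such tracking is typically wrapped around a training run (illustrative defaults, not the exact configuration used for this card):

from codecarbon import EmissionsTracker

tracker = EmissionsTracker()  # estimates energy use and CO2 emissions of this process
tracker.start()
try:
    pass  # the training run (e.g. trainer.train()) would execute here
finally:
    emissions_kg = tracker.stop()  # returns estimated emissions in kg of CO2
    print(f"Estimated emissions: {emissions_kg:.3f} kg CO2")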
Training Hardware
- On Cloud: No
- GPU Model: 1 x NVIDIA GeForce RTX 3090
- CPU Model: 13th Gen Intel(R) Core(TM) i7-13700K
- RAM Size: 31.78 GB
Framework Versions
- Python: 3.11.6
- Sentence Transformers: 3.5.0.dev0
- Transformers: 4.49.0
- PyTorch: 2.6.0+cu124
- Accelerate: 1.5.1
- Datasets: 3.3.2
- Tokenizers: 0.21.0
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
ListNetLoss
@inproceedings{cao2007learning,
title={Learning to rank: from pairwise approach to listwise approach},
author={Cao, Zhe and Qin, Tao and Liu, Tie-Yan and Tsai, Ming-Feng and Li, Hang},
booktitle={Proceedings of the 24th international conference on Machine learning},
pages={129--136},
year={2007}
}