SparseEncoder based on microsoft/mpnet-base

This is a SparseEncoder model finetuned from microsoft/mpnet-base on the natural-questions dataset using the sentence-transformers library. It maps sentences & paragraphs to a 768-dimensional vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

See train_nq.py for the training script used for this model.

Warning: Sparse models in Sentence Transformers are still quite experimental.

Model Details

Model Description

  • Model Type: Sparse Encoder
  • Base model: microsoft/mpnet-base
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity
  • Training Dataset: natural-questions
  • Language: en

Model Sources

  • Documentation: Sentence Transformers Documentation (https://sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SparseEncoder(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): CSRSparsity({'input_dim': 768, 'hidden_dim': 3072, 'k': 256, 'k_aux': 512, 'normalize': False, 'dead_threshold': 30})
)
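
The CSRSparsity head is what makes the encoder sparse: it expands the 768-dimensional pooled embedding to 3072 hidden dimensions and keeps only the k = 256 strongest activations per embedding (k_aux and dead_threshold govern an auxiliary path that revives rarely used latents). A minimal, illustrative sketch of the top-k step only, not the library's actual implementation:

import torch

def top_k_sparsify(x: torch.Tensor, k: int = 256) -> torch.Tensor:
    """Zero out all but the k largest activations in each row."""
    values, indices = torch.topk(x, k=k, dim=-1)
    sparse = torch.zeros_like(x)
    sparse.scatter_(-1, indices, values)
    return sparse

hidden = torch.relu(torch.randn(3, 3072))  # stand-in for the expanded activations
sparse = top_k_sparsify(hidden)
print((sparse != 0).sum(dim=-1))           # at most 256 non-zero entries per row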

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("tomaarsen/sparse-mpnet-base-nq-fresh")
# Run inference
sentences = [
    'who is cornelius in the book of acts',
    'Cornelius the Centurion Cornelius (Greek: Κορνήλιος) was a Roman centurion who is considered by Christians to be one of the first Gentiles to convert to the faith, as related in Acts of the Apostles.',
    "Joe Ranft Ranft reunited with Lasseter when he was hired by Pixar in 1991 as their head of story.[1] There he worked on all of their films produced up to 2006; this included Toy Story (for which he received an Academy Award nomination) and A Bug's Life, as the co-story writer and others as story supervisor. His final film was Cars. He also voiced characters in many of the films, including Heimlich the caterpillar in A Bug's Life, Wheezy the penguin in Toy Story 2, and Jacques the shrimp in Finding Nemo.[1]",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
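
The evaluation below also reports results with embeddings truncated to 16-256 dimensions. As a hedged sketch (truncate_dim is a standard Sentence Transformers load-time option, but its interaction with this experimental sparse head is untested here), the model can be loaded with a matching truncation dimension:

from sentence_transformers import SentenceTransformer

# Truncate embeddings to 128 dimensions, assumed to mirror the
# NanoBEIR_mean_128 evaluation setting below; verify the output shape yourself.
model = SentenceTransformer("tomaarsen/sparse-mpnet-base-nq-fresh", truncate_dim=128)
embeddings = model.encode(["who is cornelius in the book of acts"])
print(embeddings.shape)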

Evaluation

Metrics

Sparse Information Retrieval (truncate_dim: 16)

Metric NanoMSMARCO_16 NanoNFCorpus_16 NanoNQ_16
cosine_accuracy@1 0.1 0.08 0.18
cosine_accuracy@3 0.26 0.14 0.42
cosine_accuracy@5 0.36 0.24 0.54
cosine_accuracy@10 0.5 0.32 0.64
cosine_precision@1 0.1 0.08 0.18
cosine_precision@3 0.0867 0.06 0.14
cosine_precision@5 0.072 0.08 0.108
cosine_precision@10 0.05 0.05 0.064
cosine_recall@1 0.1 0.006 0.18
cosine_recall@3 0.26 0.0094 0.4
cosine_recall@5 0.36 0.0133 0.5
cosine_recall@10 0.5 0.0165 0.6
cosine_ndcg@10 0.2721 0.061 0.3867
cosine_mrr@10 0.2023 0.1407 0.3267
cosine_map@100 0.2176 0.0153 0.325

Sparse Nano BEIR

  • Dataset: NanoBEIR_mean_16
  • Evaluated with SparseNanoBEIREvaluator using these parameters:
    {
        "dataset_names": [
            "msmarco",
            "nfcorpus",
            "nq"
        ],
        "truncate_dim": 16
    }
    
Metric Value
cosine_accuracy@1 0.12
cosine_accuracy@3 0.2733
cosine_accuracy@5 0.38
cosine_accuracy@10 0.4867
cosine_precision@1 0.12
cosine_precision@3 0.0956
cosine_precision@5 0.0867
cosine_precision@10 0.0547
cosine_recall@1 0.0953
cosine_recall@3 0.2231
cosine_recall@5 0.2911
cosine_recall@10 0.3722
cosine_ndcg@10 0.2399
cosine_mrr@10 0.2233
cosine_map@100 0.186
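
These NanoBEIR numbers can be reproduced along the following lines. This is a hedged sketch: the import path below is assumed from newer sentence-transformers releases, where the sparse-encoder API is still experimental and the model may need to be loaded with a dedicated SparseEncoder class instead.

from sentence_transformers import SentenceTransformer
from sentence_transformers.sparse_encoder.evaluation import SparseNanoBEIREvaluator

model = SentenceTransformer("tomaarsen/sparse-mpnet-base-nq-fresh")
evaluator = SparseNanoBEIREvaluator(
    dataset_names=["msmarco", "nfcorpus", "nq"],
    truncate_dim=16,
)
results = evaluator(model)
print(results)  # includes keys such as NanoBEIR_mean_16_cosine_ndcg@10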

Sparse Information Retrieval (truncate_dim: 32)

Metric NanoMSMARCO_32 NanoNFCorpus_32 NanoNQ_32
cosine_accuracy@1 0.18 0.14 0.32
cosine_accuracy@3 0.26 0.26 0.46
cosine_accuracy@5 0.36 0.28 0.58
cosine_accuracy@10 0.56 0.34 0.68
cosine_precision@1 0.18 0.14 0.32
cosine_precision@3 0.0867 0.1133 0.1533
cosine_precision@5 0.072 0.096 0.116
cosine_precision@10 0.056 0.09 0.068
cosine_recall@1 0.18 0.0077 0.31
cosine_recall@3 0.26 0.0123 0.42
cosine_recall@5 0.36 0.017 0.53
cosine_recall@10 0.56 0.0242 0.63
cosine_ndcg@10 0.3311 0.1023 0.4604
cosine_mrr@10 0.2634 0.2055 0.4212
cosine_map@100 0.2794 0.0226 0.4113

Sparse Nano BEIR

  • Dataset: NanoBEIR_mean_32
  • Evaluated with SparseNanoBEIREvaluator using these parameters:
    {
        "dataset_names": [
            "msmarco",
            "nfcorpus",
            "nq"
        ],
        "truncate_dim": 32
    }
    
Metric Value
cosine_accuracy@1 0.2133
cosine_accuracy@3 0.3267
cosine_accuracy@5 0.4067
cosine_accuracy@10 0.5267
cosine_precision@1 0.2133
cosine_precision@3 0.1178
cosine_precision@5 0.0947
cosine_precision@10 0.0713
cosine_recall@1 0.1659
cosine_recall@3 0.2308
cosine_recall@5 0.3023
cosine_recall@10 0.4047
cosine_ndcg@10 0.2979
cosine_mrr@10 0.2967
cosine_map@100 0.2377

Sparse Information Retrieval (truncate_dim: 64)

Metric NanoMSMARCO_64 NanoNFCorpus_64 NanoNQ_64
cosine_accuracy@1 0.16 0.18 0.44
cosine_accuracy@3 0.38 0.26 0.62
cosine_accuracy@5 0.46 0.32 0.68
cosine_accuracy@10 0.6 0.4 0.72
cosine_precision@1 0.16 0.18 0.44
cosine_precision@3 0.1267 0.1267 0.2067
cosine_precision@5 0.092 0.12 0.14
cosine_precision@10 0.06 0.088 0.074
cosine_recall@1 0.16 0.0095 0.42
cosine_recall@3 0.38 0.0129 0.58
cosine_recall@5 0.46 0.0369 0.64
cosine_recall@10 0.6 0.0476 0.68
cosine_ndcg@10 0.3545 0.115 0.5619
cosine_mrr@10 0.278 0.2421 0.5396
cosine_map@100 0.2957 0.0318 0.5268

Sparse Nano BEIR

  • Dataset: NanoBEIR_mean_64
  • Evaluated with SparseNanoBEIREvaluator using these parameters:
    {
        "dataset_names": [
            "msmarco",
            "nfcorpus",
            "nq"
        ],
        "truncate_dim": 64
    }
    
Metric Value
cosine_accuracy@1 0.26
cosine_accuracy@3 0.42
cosine_accuracy@5 0.4867
cosine_accuracy@10 0.5733
cosine_precision@1 0.26
cosine_precision@3 0.1533
cosine_precision@5 0.1173
cosine_precision@10 0.074
cosine_recall@1 0.1965
cosine_recall@3 0.3243
cosine_recall@5 0.379
cosine_recall@10 0.4425
cosine_ndcg@10 0.3438
cosine_mrr@10 0.3532
cosine_map@100 0.2848

Sparse Information Retrieval (truncate_dim: 128)

Metric NanoMSMARCO_128 NanoNFCorpus_128 NanoNQ_128
cosine_accuracy@1 0.2 0.14 0.38
cosine_accuracy@3 0.34 0.34 0.56
cosine_accuracy@5 0.46 0.38 0.7
cosine_accuracy@10 0.68 0.52 0.8
cosine_precision@1 0.2 0.14 0.38
cosine_precision@3 0.1133 0.1667 0.1867
cosine_precision@5 0.092 0.128 0.144
cosine_precision@10 0.068 0.114 0.082
cosine_recall@1 0.2 0.0037 0.35
cosine_recall@3 0.34 0.0212 0.53
cosine_recall@5 0.46 0.0246 0.66
cosine_recall@10 0.68 0.0433 0.76
cosine_ndcg@10 0.4022 0.1267 0.5527
cosine_mrr@10 0.3182 0.2538 0.5072
cosine_map@100 0.3323 0.0333 0.4847

Sparse Nano BEIR

  • Dataset: NanoBEIR_mean_128
  • Evaluated with SparseNanoBEIREvaluator using these parameters:
    {
        "dataset_names": [
            "msmarco",
            "nfcorpus",
            "nq"
        ],
        "truncate_dim": 128
    }
    
Metric Value
cosine_accuracy@1 0.24
cosine_accuracy@3 0.4133
cosine_accuracy@5 0.5133
cosine_accuracy@10 0.6667
cosine_precision@1 0.24
cosine_precision@3 0.1556
cosine_precision@5 0.1213
cosine_precision@10 0.088
cosine_recall@1 0.1846
cosine_recall@3 0.2971
cosine_recall@5 0.3815
cosine_recall@10 0.4944
cosine_ndcg@10 0.3605
cosine_mrr@10 0.3597
cosine_map@100 0.2834

Sparse Information Retrieval (truncate_dim: 256)

Metric NanoMSMARCO_256 NanoNFCorpus_256 NanoNQ_256
cosine_accuracy@1 0.26 0.18 0.42
cosine_accuracy@3 0.48 0.28 0.58
cosine_accuracy@5 0.52 0.38 0.68
cosine_accuracy@10 0.68 0.5 0.76
cosine_precision@1 0.26 0.18 0.42
cosine_precision@3 0.16 0.1467 0.1933
cosine_precision@5 0.104 0.14 0.14
cosine_precision@10 0.068 0.114 0.08
cosine_recall@1 0.26 0.0055 0.4
cosine_recall@3 0.48 0.0114 0.54
cosine_recall@5 0.52 0.0213 0.64
cosine_recall@10 0.68 0.0347 0.73
cosine_ndcg@10 0.4652 0.1263 0.5612
cosine_mrr@10 0.398 0.2575 0.5227
cosine_map@100 0.4125 0.0337 0.5087

Sparse Nano BEIR

  • Dataset: NanoBEIR_mean_256
  • Evaluated with SparseNanoBEIREvaluator using these parameters:
    {
        "dataset_names": [
            "msmarco",
            "nfcorpus",
            "nq"
        ],
        "truncate_dim": 256
    }
    
Metric Value
cosine_accuracy@1 0.2867
cosine_accuracy@3 0.4467
cosine_accuracy@5 0.5267
cosine_accuracy@10 0.6467
cosine_precision@1 0.2867
cosine_precision@3 0.1667
cosine_precision@5 0.128
cosine_precision@10 0.0873
cosine_recall@1 0.2218
cosine_recall@3 0.3438
cosine_recall@5 0.3938
cosine_recall@10 0.4816
cosine_ndcg@10 0.3842
cosine_mrr@10 0.3927
cosine_map@100 0.3183

Training Details

Training Dataset

natural-questions

  • Dataset: natural-questions at f9e894e
  • Size: 99,000 training samples
  • Columns: query and answer
  • Approximate statistics based on the first 1000 samples:
    • query: string, min: 10 tokens, mean: 11.71 tokens, max: 26 tokens
    • answer: string, min: 4 tokens, mean: 131.81 tokens, max: 450 tokens
  • Samples:
    • query: who played the father in papa don't preach
      answer: Alex McArthur Alex McArthur (born March 6, 1957) is an American actor.
    • query: where was the location of the battle of hastings
      answer: Battle of Hastings The Battle of Hastings[a] was fought on 14 October 1066 between the Norman-French army of William, the Duke of Normandy, and an English army under the Anglo-Saxon King Harold Godwinson, beginning the Norman conquest of England. It took place approximately 7 miles (11 kilometres) northwest of Hastings, close to the present-day town of Battle, East Sussex, and was a decisive Norman victory.
    • query: how many puppies can a dog give birth to
      answer: Canine reproduction The largest litter size to date was set by a Neapolitan Mastiff in Manea, Cambridgeshire, UK on November 29, 2004; the litter was 24 puppies.[22]
  • Loss: CSRLoss with these parameters:
    {
        "beta": 0.1,
        "gamma": 1,
        "scale": 20.0
    }
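
Roughly, CSRLoss couples a reconstruction objective (keeping the sparse code faithful to the dense embedding, with beta weighting the auxiliary term that revives dead latents) with a contrastive ranking objective weighted by gamma and using scale as the similarity temperature. A sketch of the setup, assuming the experimental sparse-encoder losses module:

from sentence_transformers.sparse_encoder.losses import CSRLoss

# `model` is the SparseEncoder being trained; values mirror the JSON above.
loss = CSRLoss(model, beta=0.1, gamma=1.0, scale=20.0)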
    

Evaluation Dataset

natural-questions

  • Dataset: natural-questions at f9e894e
  • Size: 1,000 evaluation samples
  • Columns: query and answer
  • Approximate statistics based on the first 1000 samples:
    • query: string, min: 10 tokens, mean: 11.69 tokens, max: 23 tokens
    • answer: string, min: 15 tokens, mean: 134.01 tokens, max: 512 tokens
  • Samples:
    • query: where is the tiber river located in italy
      answer: Tiber The Tiber (/ˈtaɪbər/, Latin: Tiberis,[1] Italian: Tevere [ˈteːvere])[2] is the third-longest river in Italy, rising in the Apennine Mountains in Emilia-Romagna and flowing 406 kilometres (252 mi) through Tuscany, Umbria and Lazio, where it is joined by the river Aniene, to the Tyrrhenian Sea, between Ostia and Fiumicino.[3] It drains a basin estimated at 17,375 square kilometres (6,709 sq mi). The river has achieved lasting fame as the main watercourse of the city of Rome, founded on its eastern banks.
    • query: what kind of car does jay gatsby drive
      answer: Jay Gatsby At the Buchanan home, Jordan Baker, Nick, Jay, and the Buchanans decide to visit New York City. Tom borrows Gatsby's yellow Rolls Royce to drive up to the city. On the way to New York City, Tom makes a detour at a gas station in "the Valley of Ashes", a run-down part of Long Island. The owner, George Wilson, shares his concern that his wife, Myrtle, may be having an affair. This unnerves Tom, who has been having an affair with Myrtle, and he leaves in a hurry.
    • query: who sings if i can dream about you
      answer: I Can Dream About You "I Can Dream About You" is a song performed by American singer Dan Hartman on the soundtrack album of the film Streets of Fire. Released in 1984 as a single from the soundtrack, and included on Hartman's album I Can Dream About You, it reached number 6 on the Billboard Hot 100.[1]
  • Loss: CSRLoss with these parameters:
    {
        "beta": 0.1,
        "gamma": 1,
        "scale": 20.0
    }
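
The 99,000/1,000 train/evaluation split above can be recreated from the Hugging Face dataset; a sketch (the seed is illustrative, and train_nq.py remains the authoritative reference):

from datasets import load_dataset

# The revision noted above ("f9e894e") can be pinned via the `revision` argument.
dataset = load_dataset("sentence-transformers/natural-questions", split="train")
dataset = dataset.train_test_split(test_size=1_000, seed=12)
train_dataset, eval_dataset = dataset["train"], dataset["test"]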
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 32
  • learning_rate: 4e-05
  • weight_decay: 0.0001
  • adam_epsilon: 6.25e-10
  • num_train_epochs: 1
  • warmup_ratio: 0.1
  • bf16: True
  • batch_sampler: no_duplicates
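
Putting the pieces together, a hedged reconstruction of the training run with these hyperparameters (train_nq.py is the authoritative script; output_dir and the exact trainer wiring here are assumptions):

from sentence_transformers import SentenceTransformerTrainer, SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

# `model`, `loss`, `train_dataset`, and `eval_dataset` as in the sketches above
args = SentenceTransformerTrainingArguments(
    output_dir="models/sparse-mpnet-base-nq",  # hypothetical path
    num_train_epochs=1,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    learning_rate=4e-5,
    weight_decay=1e-4,
    adam_epsilon=6.25e-10,
    warmup_ratio=0.1,
    bf16=True,
    eval_strategy="steps",
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)
trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=loss,
)
trainer.train()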

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 32
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 4e-05
  • weight_decay: 0.0001
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 6.25e-10
  • max_grad_norm: 1.0
  • num_train_epochs: 1
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • tp_size: 0
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch Step Training Loss Validation Loss NanoMSMARCO_16_cosine_ndcg@10 NanoNFCorpus_16_cosine_ndcg@10 NanoNQ_16_cosine_ndcg@10 NanoBEIR_mean_16_cosine_ndcg@10 NanoMSMARCO_32_cosine_ndcg@10 NanoNFCorpus_32_cosine_ndcg@10 NanoNQ_32_cosine_ndcg@10 NanoBEIR_mean_32_cosine_ndcg@10 NanoMSMARCO_64_cosine_ndcg@10 NanoNFCorpus_64_cosine_ndcg@10 NanoNQ_64_cosine_ndcg@10 NanoBEIR_mean_64_cosine_ndcg@10 NanoMSMARCO_128_cosine_ndcg@10 NanoNFCorpus_128_cosine_ndcg@10 NanoNQ_128_cosine_ndcg@10 NanoBEIR_mean_128_cosine_ndcg@10 NanoMSMARCO_256_cosine_ndcg@10 NanoNFCorpus_256_cosine_ndcg@10 NanoNQ_256_cosine_ndcg@10 NanoBEIR_mean_256_cosine_ndcg@10
-1 -1 - - 0.0318 0.0148 0.0149 0.0205 0.0794 0.0234 0.0102 0.0377 0.0855 0.0195 0.0508 0.0519 0.1081 0.0246 0.0264 0.0530 0.1006 0.0249 0.0388 0.0547
0.0646 200 0.7332 - - - - - - - - - - - - - - - - - - - - -
0.1293 400 0.2606 0.1970 0.2845 0.0970 0.3546 0.2454 0.3778 0.1358 0.3455 0.2864 0.3868 0.1563 0.3806 0.3079 0.3988 0.1664 0.4035 0.3229 0.4020 0.1782 0.4181 0.3327
0.1939 600 0.2247 - - - - - - - - - - - - - - - - - - - - -
0.2586 800 0.1983 0.1750 0.2908 0.0866 0.3730 0.2502 0.3324 0.1155 0.4275 0.2918 0.3511 0.1621 0.4998 0.3377 0.3920 0.1563 0.5174 0.3553 0.4152 0.1555 0.5153 0.3620
0.3232 1000 0.1822 - - - - - - - - - - - - - - - - - - - - -
0.3878 1200 0.1846 0.1594 0.2775 0.0785 0.3723 0.2428 0.2642 0.1076 0.4389 0.2702 0.3865 0.1328 0.4329 0.3174 0.3883 0.1446 0.5040 0.3456 0.3638 0.1529 0.4939 0.3369
0.4525 1400 0.1669 - - - - - - - - - - - - - - - - - - - - -
0.5171 1600 0.1573 0.1452 0.2740 0.0624 0.3670 0.2345 0.3557 0.0855 0.4188 0.2867 0.4094 0.1099 0.5027 0.3407 0.3885 0.1340 0.4990 0.3405 0.4820 0.1577 0.5453 0.3950
0.5818 1800 0.1502 - - - - - - - - - - - - - - - - - - - - -
0.6464 2000 0.1375 0.1255 0.2307 0.0685 0.3801 0.2264 0.2529 0.0815 0.4335 0.2560 0.3509 0.0955 0.4611 0.3025 0.3932 0.1339 0.4875 0.3382 0.4184 0.1483 0.4904 0.3523
0.7111 2200 0.1359 - - - - - - - - - - - - - - - - - - - - -
0.7757 2400 0.1288 0.1184 0.2737 0.0703 0.3419 0.2286 0.3765 0.0843 0.4440 0.3016 0.3927 0.1247 0.5285 0.3486 0.3726 0.1203 0.5153 0.3361 0.4676 0.1343 0.5523 0.3847
0.8403 2600 0.1235 - - - - - - - - - - - - - - - - - - - - -
0.9050 2800 0.1168 0.1094 0.2751 0.0710 0.3602 0.2354 0.3227 0.0966 0.5046 0.3080 0.4112 0.1129 0.5268 0.3503 0.4077 0.1259 0.5253 0.3530 0.4642 0.1238 0.5726 0.3869
0.9696 3000 0.1187 - - - - - - - - - - - - - - - - - - - - -
-1 -1 - - 0.2721 0.0610 0.3867 0.2399 0.3311 0.1023 0.4604 0.2979 0.3545 0.1150 0.5619 0.3438 0.4022 0.1267 0.5527 0.3605 0.4652 0.1263 0.5612 0.3842

Environmental Impact

Carbon emissions were measured using CodeCarbon.

  • Energy Consumed: 0.292 kWh
  • Carbon Emitted: 0.113 kg of CO2
  • Hours Used: 0.773 hours
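
For reference, CodeCarbon measurements of this kind are typically collected by wrapping the training run in an EmissionsTracker; a generic sketch, not the exact instrumentation used here:

from codecarbon import EmissionsTracker

tracker = EmissionsTracker()
tracker.start()
# ... run training ...
emissions_kg = tracker.stop()  # estimated kg of CO2 emitted
print(emissions_kg)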

Training Hardware

  • On Cloud: No
  • GPU Model: 1 x NVIDIA GeForce RTX 3090
  • CPU Model: 13th Gen Intel(R) Core(TM) i7-13700K
  • RAM Size: 31.78 GB

Framework Versions

  • Python: 3.11.6
  • Sentence Transformers: 4.1.0.dev0
  • Transformers: 4.52.0.dev0
  • PyTorch: 2.6.0+cu124
  • Accelerate: 1.5.1
  • Datasets: 3.3.2
  • Tokenizers: 0.21.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}