ModernBERT Embed base fitness health Matryoshka

This is a sentence-transformers model finetuned from nomic-ai/modernbert-embed-base on the json dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: nomic-ai/modernbert-embed-base
  • Maximum Sequence Length: 8192 tokens
  • Output Dimensionality: 768 dimensions (Matryoshka; truncatable to 512, 256, 128, or 64)
  • Model Size: 149M parameters
  • Similarity Function: Cosine Similarity
  • Training Dataset:
    • json
  • Language: en
  • License: apache-2.0

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: ModernBertModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
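
The modules above correspond to a ModernBERT encoder (sequences up to 8192 tokens), mean pooling over token embeddings, and L2 normalization. As a minimal illustrative sketch only (loading via SentenceTransformer, shown below, is the supported path), the same pipeline can be reproduced with plain transformers:

import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

model_id = "kokojake/modernbert-embed-base-fitness-health-matryoshka"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)  # loads the ModernBertModel weights

def embed(texts):
    batch = tokenizer(texts, padding=True, truncation=True,
                      max_length=8192, return_tensors="pt")
    with torch.no_grad():
        token_embeddings = model(**batch).last_hidden_state  # (batch, seq, 768)
    # Mean pooling: average token embeddings, ignoring padding positions.
    mask = batch["attention_mask"].unsqueeze(-1).float()
    pooled = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)
    # Normalize: unit-length vectors, so dot product equals cosine similarity.
    return F.normalize(pooled, p=2, dim=1)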

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("kokojake/modernbert-embed-base-fitness-health-matryoshka")
# Run inference
sentences = [
    'facilitate publication;\n•\u2009\x07Mobilise academic expertise for \ndeveloping training programmes and \nmobilising trainers.\n\t Weigh in on the debate around issues \nrelated to rehabilitation promotion and funding, promote best practices to \ninfluence policies that favour access \nto rehabilitation services and thereby \nmove toward advocacy actions.\n48\nUsers,\nDisabled people’s\norganisations\nService\nproviders\nDecision-makers User \ngroups\nLocal\nauthorities\nMinistry of \nHealth, Ministry \nof Social Action,\netc.\nUnited Nations \n(WHO, etc.)\nHospitals, \nReference\nrehabilitation centre\nProfessional \nassociations\nService provider groups\nTraining institutes\nCommunity- \nbased Services\nFederation\nand national\n  associations\nHospital, \nHealth \ncare centres Network: actors that can be mobilised for physical  \nand functional rehabilitation\nInternational\nNational\nLocal\nInstitutional donors\nFacilitation organisations* * \x07Organisations (IOs, NGOs, etc.), agencies, universities and research centres that facilitate the existence of physical \nand functional rehabilitation via national or international projects.\nInternational \n     consortia (IDDC, etc.)\n                   International\n                    networks \n          (CBR, WCPT, \n     WFOT, ISPO,\nFATO, etc.)\nLevels of intervention © Handicap International, 2013\n \n \n49\n\xa0Intervention.\n\xa0modalities\u200a.\nThe Unit has technical resources specifically \npositioned to be able to reach the maximum',
    'training programmes for rehabilitation professionals',
    'risks of yo-yo dieting and heart disease',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
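
Because the model was trained with MatryoshkaLoss, its embeddings can be truncated to 512, 256, 128, or 64 dimensions for cheaper storage and faster search, at the modest quality cost shown in the Evaluation section. A minimal sketch using the truncate_dim argument of SentenceTransformer (the query text is just an example):

from sentence_transformers import SentenceTransformer

# Load the model so that encode() returns 256-dimensional embeddings,
# one of the dimensions this model was trained on via MatryoshkaLoss.
model = SentenceTransformer(
    "kokojake/modernbert-embed-base-fitness-health-matryoshka",
    truncate_dim=256,
)
embeddings = model.encode(["risks of yo-yo dieting and heart disease"])
print(embeddings.shape)
# (1, 256)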

Evaluation

Metrics

The five tables below report retrieval quality for the same model evaluated at each Matryoshka dimension (768, 512, 256, 128, and 64); the values match the dim_* columns in the Training Logs.

Information Retrieval (dim_768)

Metric Value
cosine_accuracy@1 0.4789
cosine_accuracy@3 0.4789
cosine_accuracy@5 0.4789
cosine_accuracy@10 0.5219
cosine_precision@1 0.4789
cosine_precision@3 0.4789
cosine_precision@5 0.4789
cosine_precision@10 0.4395
cosine_recall@1 0.0601
cosine_recall@3 0.1802
cosine_recall@5 0.3003
cosine_recall@10 0.5134
cosine_ndcg@10 0.499
cosine_mrr@10 0.4861
cosine_map@100 0.5681

Information Retrieval (dim_512)

Metric Value
cosine_accuracy@1 0.4742
cosine_accuracy@3 0.4742
cosine_accuracy@5 0.4742
cosine_accuracy@10 0.5141
cosine_precision@1 0.4742
cosine_precision@3 0.4742
cosine_precision@5 0.4742
cosine_precision@10 0.4362
cosine_recall@1 0.059
cosine_recall@3 0.1769
cosine_recall@5 0.2948
cosine_recall@10 0.5078
cosine_ndcg@10 0.4934
cosine_mrr@10 0.4808
cosine_map@100 0.5632

Information Retrieval (dim_256)

Metric Value
cosine_accuracy@1 0.4555
cosine_accuracy@3 0.4555
cosine_accuracy@5 0.4555
cosine_accuracy@10 0.4969
cosine_precision@1 0.4555
cosine_precision@3 0.4555
cosine_precision@5 0.4555
cosine_precision@10 0.4188
cosine_recall@1 0.057
cosine_recall@3 0.1711
cosine_recall@5 0.2851
cosine_recall@10 0.4882
cosine_ndcg@10 0.4746
cosine_mrr@10 0.4624
cosine_map@100 0.5446

Information Retrieval (dim_128)

Metric Value
cosine_accuracy@1 0.4352
cosine_accuracy@3 0.4352
cosine_accuracy@5 0.4352
cosine_accuracy@10 0.4727
cosine_precision@1 0.4352
cosine_precision@3 0.4352
cosine_precision@5 0.4352
cosine_precision@10 0.3988
cosine_recall@1 0.0545
cosine_recall@3 0.1636
cosine_recall@5 0.2726
cosine_recall@10 0.4639
cosine_ndcg@10 0.4522
cosine_mrr@10 0.4414
cosine_map@100 0.5208

Information Retrieval (dim_64)

Metric Value
cosine_accuracy@1 0.3945
cosine_accuracy@3 0.3945
cosine_accuracy@5 0.3945
cosine_accuracy@10 0.4297
cosine_precision@1 0.3945
cosine_precision@3 0.3945
cosine_precision@5 0.3945
cosine_precision@10 0.3598
cosine_recall@1 0.0499
cosine_recall@3 0.1497
cosine_recall@5 0.2495
cosine_recall@10 0.4224
cosine_ndcg@10 0.4109
cosine_mrr@10 0.4004
cosine_map@100 0.4763
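
Numbers like these can be reproduced with InformationRetrievalEvaluator, evaluating once per Matryoshka dimension. A hedged sketch; the queries, corpus, and relevance judgements below are toy stand-ins, since the held-out split is not published with this card:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("kokojake/modernbert-embed-base-fitness-health-matryoshka")

# Toy stand-ins for the held-out split.
queries = {"q1": "training programmes for rehabilitation professionals"}
corpus = {
    "d1": "Mobilise academic expertise for developing training programmes and mobilising trainers.",
    "d2": "Risks of yo-yo dieting include weight cycling and cardiovascular strain.",
}
relevant_docs = {"q1": {"d1"}}

for dim in (768, 512, 256, 128, 64):
    evaluator = InformationRetrievalEvaluator(
        queries=queries,
        corpus=corpus,
        relevant_docs=relevant_docs,
        truncate_dim=dim,  # evaluate at this Matryoshka dimension
        name=f"dim_{dim}",
    )
    results = evaluator(model)
    print(dim, results[f"dim_{dim}_cosine_ndcg@10"])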

Training Details

Training Dataset

json

  • Dataset: json
  • Size: 11,518 training samples
  • Columns: positive and anchor
  • Approximate statistics based on the first 1000 samples:
    • positive: string; min 7 tokens, mean 239.56 tokens, max 410 tokens
    • anchor: string; min 5 tokens, mean 10.8 tokens, max 26 tokens
  • Samples (each pairs a positive passage with its anchor query):
    • positive: values and preferences among older people in relation to exercise, noting that older people valued the outcomes of exercise for maintaining health. They judged that the evidence for older people was likely to be relevant to all adults and agreed there was likely to be some uncertainty or variability with respect to people’s values and preferences for exercise and its outcomes. Some GDG members suggested that given reasonably consistent benefit and very little harms, there would be no important uncertainty or variability regarding people’s values on the outcomes of exercise. In the absence of direct qualitative evidence, the GDG judged from their own experience that resource requirements for structured exercise programmes would vary by country and setting, but in some settings might be associated with moderate costs (for structured exercise programmes, compared with self-managed physical activity). The GDG noted that costs could also vary according to the modality of ...
      anchor: exercise preferences and outcomes variability among adults
    • positive: ICRC, ICRC Hospital Design and Rehabilitation Guidelines, Vol. 1: Models Of Care, ICRC, Geneva, 2022: https://shop.icrc.org/icrc-hospital-design-and-rehabilitation-guidelines-volume-1-models-of-care-print-en.html
      anchor: ICRC rehabilitation guidelines 2022
    • positive: fitness training is guided by a health worker or (if feasible) performed self-directed by the patient following education and advice. Metacognitive training: Metacognitive training aims to improve social functioning through reducing cognitive biases/psychotic symptoms (e.g. delusion, impaired self-awareness or insight). Metacognitive training is usually provided as a structured group intervention during which participants perform exercises to reflect their own thinking and receive training in strategies to cope with cognitive biases during daily routines. Metacognitive training is guided by a health worker. Mindfulness-based approaches: Mindfulness-based interventions aim to achieve a state of mindfulness in which a person becomes more aware of their physical, mental, and emotional condition in the present moment, without becoming judgemental. Mindfulness-based interventions (e.g. mindfulness-based cognitive therapy, acceptance and commitment therapy) help people to pay attentio...
      anchor: structured group interventions for metacognitive training
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            768,
            512,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
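
In code, this configuration corresponds to the following construction (a sketch; the base model shown is the starting checkpoint for fine-tuning):

from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("nomic-ai/modernbert-embed-base")

# Inner loss: in-batch negatives ranking over (anchor, positive) pairs.
inner_loss = MultipleNegativesRankingLoss(model)

# Apply the same objective at every truncated dimension, equally weighted,
# matching the parameter dump above.
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
    n_dims_per_step=-1,  # -1 means: use all dimensions at every training step
)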
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 16
  • gradient_accumulation_steps: 16
  • learning_rate: 2e-05
  • num_train_epochs: 4
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.1
  • bf16: True
  • tf32: True
  • load_best_model_at_end: True
  • optim: adamw_torch_fused
  • batch_sampler: no_duplicates
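
These map onto SentenceTransformerTrainingArguments roughly as follows (a sketch; output_dir and save_strategy are assumptions, the latter because load_best_model_at_end requires the save and eval strategies to match). Note the effective training batch size is 32 × 16 = 512 via gradient accumulation.

from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="modernbert-embed-base-fitness-health-matryoshka",  # placeholder
    num_train_epochs=4,
    per_device_train_batch_size=32,
    gradient_accumulation_steps=16,  # effective batch size: 32 * 16 = 512
    per_device_eval_batch_size=16,
    learning_rate=2e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    bf16=True,
    tf32=True,
    optim="adamw_torch_fused",
    eval_strategy="epoch",
    save_strategy="epoch",  # assumed; must match eval_strategy for load_best_model_at_end
    load_best_model_at_end=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # avoid duplicate texts within a batch
)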

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 16
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 4
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: True
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • tp_size: 0
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional
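
Putting the pieces together, a hypothetical end-to-end fine-tuning sketch with SentenceTransformerTrainer; the data file path is a placeholder, and only output_dir is set on the arguments here (in practice, add the non-default hyperparameters listed above):

from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("nomic-ai/modernbert-embed-base")

# Placeholder path; the file holds the 11,518 positive/anchor pairs described above.
train_dataset = load_dataset("json", data_files="train.json", split="train")

loss = MatryoshkaLoss(
    model,
    MultipleNegativesRankingLoss(model),
    matryoshka_dims=[768, 512, 256, 128, 64],
)

args = SentenceTransformerTrainingArguments(
    output_dir="modernbert-embed-base-fitness-health-matryoshka",
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()
model.save_pretrained("modernbert-embed-base-fitness-health-matryoshka/final")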

Training Logs

Epoch   Step  Training Loss  dim_768  dim_512  dim_256  dim_128  dim_64
0.4444  10    64.4729        -        -        -        -        -
0.8889  20    32.1029        -        -        -        -        -
1.0     23    -              0.4734   0.4741   0.4590   0.4271   0.3722
1.3111  30    23.9454        -        -        -        -        -
1.7556  40    19.7319        -        -        -        -        -
2.0     46    -              0.4934   0.4926   0.4723   0.4471   0.4021
2.1778  50    17.6381        -        -        -        -        -
2.6222  60    16.9329        -        -        -        -        -
3.0     69    -              0.4980   0.4954   0.4746   0.4528   0.4089
3.0444  70    15.4096        -        -        -        -        -
3.4889  80    15.4012        -        -        -        -        -
3.8444  88    -              0.4990   0.4934   0.4746   0.4522   0.4109

All dim_* columns report cosine_ndcg@10.
  • The final row (epoch 3.8444, step 88) denotes the saved checkpoint.

Framework Versions

  • Python: 3.11.11
  • Sentence Transformers: 4.0.2
  • Transformers: 4.51.1
  • PyTorch: 2.6.0+cu124
  • Accelerate: 1.5.2
  • Datasets: 3.5.0
  • Tokenizers: 0.21.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}