ModernBERT Embed base fitness health Matryoshka

This is a sentence-transformers model finetuned from kokojake/modernbert-embed-base-fitness-health-matryoshka-8-epochs on the json dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: kokojake/modernbert-embed-base-fitness-health-matryoshka-8-epochs
  • Maximum Sequence Length: 8192 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity
  • Training Dataset: json

Model Sources

  • Documentation: Sentence Transformers Documentation (https://www.sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: ModernBertModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
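
The three modules above apply, in order: a ModernBERT encoder (sequences up to 8,192 tokens), mean pooling over the token embeddings, and L2 normalization, so dot products between outputs equal cosine similarities. The following is a minimal sketch of that pipeline in plain transformers, only to make the modules concrete; the SentenceTransformer loader in the Usage section below is the supported path.

import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

model_id = "kokojake/modernbert-embed-base-fitness-health-matryoshka-8-epochs-25k"
tokenizer = AutoTokenizer.from_pretrained(model_id)
encoder = AutoModel.from_pretrained(model_id)

def embed(texts):
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        token_embeddings = encoder(**batch).last_hidden_state  # (batch, seq, 768)
    # Pooling: mean over non-padding tokens (pooling_mode_mean_tokens).
    mask = batch["attention_mask"].unsqueeze(-1).float()
    pooled = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)
    # Normalize: L2-normalize so dot product equals cosine similarity.
    return F.normalize(pooled, p=2, dim=1)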

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("kokojake/modernbert-embed-base-fitness-health-matryoshka-8-epochs-25k")
# Run inference
sentences = [
    'Low back pain is \nthe leading cause of \ndisability globally across \nall ages and in both \nsexes, representing 8% \nof all YLDs in 2020 (10).',
    'prevalence of low back pain by age and sex',
    'BMI calculation in postpartum studies',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
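
Because the model was trained with MatryoshkaLoss (see Training Details), embeddings can also be truncated to 512, 256, 128, or 64 dimensions with a graceful quality trade-off, as the Evaluation tables below show. A short sketch using the truncate_dim argument of SentenceTransformer:

from sentence_transformers import SentenceTransformer

# truncate_dim keeps only the first N embedding dimensions; choose one of
# the dimensions the model was trained on (768, 512, 256, 128, 64).
model = SentenceTransformer(
    "kokojake/modernbert-embed-base-fitness-health-matryoshka-8-epochs-25k",
    truncate_dim=256,
)
embeddings = model.encode(["prevalence of low back pain by age and sex"])
print(embeddings.shape)
# (1, 256)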

Evaluation

Metrics

Information Retrieval (dim_768)

Each of the five tables in this section evaluates retrieval with embeddings truncated to one Matryoshka dimension, from 768 down to 64; the dim_* labels match the columns in the Training Logs below.

| Metric | Value |
|:--------------------|-------:|
| cosine_accuracy@1   | 0.54   |
| cosine_accuracy@3   | 0.5435 |
| cosine_accuracy@5   | 0.5573 |
| cosine_accuracy@10  | 0.6335 |
| cosine_precision@1  | 0.54   |
| cosine_precision@3  | 0.5399 |
| cosine_precision@5  | 0.5347 |
| cosine_precision@10 | 0.4633 |
| cosine_recall@1     | 0.0375 |
| cosine_recall@3     | 0.1121 |
| cosine_recall@5     | 0.1837 |
| cosine_recall@10    | 0.307  |
| cosine_ndcg@10      | 0.4894 |
| cosine_mrr@10       | 0.5542 |
| cosine_map@100      | 0.3353 |

Information Retrieval (dim_512)

| Metric | Value |
|:--------------------|-------:|
| cosine_accuracy@1   | 0.5262 |
| cosine_accuracy@3   | 0.5292 |
| cosine_accuracy@5   | 0.5418 |
| cosine_accuracy@10  | 0.6248 |
| cosine_precision@1  | 0.5262 |
| cosine_precision@3  | 0.5259 |
| cosine_precision@5  | 0.52   |
| cosine_precision@10 | 0.4515 |
| cosine_recall@1     | 0.0366 |
| cosine_recall@3     | 0.1097 |
| cosine_recall@5     | 0.1793 |
| cosine_recall@10    | 0.3001 |
| cosine_ndcg@10      | 0.4769 |
| cosine_mrr@10       | 0.5405 |
| cosine_map@100      | 0.33   |

Information Retrieval (dim_256)

| Metric | Value |
|:--------------------|-------:|
| cosine_accuracy@1   | 0.5249 |
| cosine_accuracy@3   | 0.5283 |
| cosine_accuracy@5   | 0.5383 |
| cosine_accuracy@10  | 0.6248 |
| cosine_precision@1  | 0.5249 |
| cosine_precision@3  | 0.5247 |
| cosine_precision@5  | 0.5186 |
| cosine_precision@10 | 0.452  |
| cosine_recall@1     | 0.0365 |
| cosine_recall@3     | 0.1093 |
| cosine_recall@5     | 0.1786 |
| cosine_recall@10    | 0.3003 |
| cosine_ndcg@10      | 0.4768 |
| cosine_mrr@10       | 0.5393 |
| cosine_map@100      | 0.3291 |

Information Retrieval (dim_128)

| Metric | Value |
|:--------------------|-------:|
| cosine_accuracy@1   | 0.4963 |
| cosine_accuracy@3   | 0.4989 |
| cosine_accuracy@5   | 0.5054 |
| cosine_accuracy@10  | 0.579  |
| cosine_precision@1  | 0.4963 |
| cosine_precision@3  | 0.4959 |
| cosine_precision@5  | 0.4886 |
| cosine_precision@10 | 0.4225 |
| cosine_recall@1     | 0.0344 |
| cosine_recall@3     | 0.103  |
| cosine_recall@5     | 0.1678 |
| cosine_recall@10    | 0.2805 |
| cosine_ndcg@10      | 0.4474 |
| cosine_mrr@10       | 0.5081 |
| cosine_map@100      | 0.3124 |

Information Retrieval (dim_64)

| Metric | Value |
|:--------------------|-------:|
| cosine_accuracy@1   | 0.4215 |
| cosine_accuracy@3   | 0.4236 |
| cosine_accuracy@5   | 0.4349 |
| cosine_accuracy@10  | 0.5145 |
| cosine_precision@1  | 0.4215 |
| cosine_precision@3  | 0.4209 |
| cosine_precision@5  | 0.4162 |
| cosine_precision@10 | 0.3711 |
| cosine_recall@1     | 0.0291 |
| cosine_recall@3     | 0.0871 |
| cosine_recall@5     | 0.1422 |
| cosine_recall@10    | 0.2458 |
| cosine_ndcg@10      | 0.3886 |
| cosine_mrr@10       | 0.4353 |
| cosine_map@100      | 0.2758 |
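
The metric names above match sentence-transformers' InformationRetrievalEvaluator. A hedged sketch of re-running one table's evaluation follows; the queries, corpus, and relevance judgments are hypothetical placeholders, since the card does not publish its evaluation split:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

# Truncate to the dimension of the table you want to reproduce.
model = SentenceTransformer(
    "kokojake/modernbert-embed-base-fitness-health-matryoshka-8-epochs-25k",
    truncate_dim=768,
)

# Hypothetical toy data: query id -> text, doc id -> text, query id -> relevant doc ids.
queries = {"q1": "prevalence of low back pain by age and sex"}
corpus = {
    "d1": "Low back pain is the leading cause of disability globally across all ages ...",
    "d2": "BMI calculation in postpartum studies",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="dim_768")
metrics = evaluator(model)  # dict of accuracy@k, precision@k, recall@k, ndcg@10, mrr@10, map@100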

Training Details

Training Dataset

json

  • Dataset: json
  • Size: 20,792 training samples
  • Columns: positive and anchor
  • Approximate statistics based on the first 1000 samples:

    |      | positive      | anchor       |
    |:-----|:--------------|:-------------|
    | type | string        | string       |
    | min  | 11 tokens     | 5 tokens     |
    | mean | 229.18 tokens | 11.11 tokens |
    | max  | 412 tokens    | 45 tokens    |
  • Samples:
    Sample 1
    positive: A total of 5,697 postmenopausal women were included in the meta-analysis. The mean age of participants was ranged from 51 to ~89 yrs., and the mean BMI was ranged from 21 to 34 kg.m2. Sample size of individual studies was ranged from 14 to 320 participants. To increase the generalizability of our meta-analysis results, postmenopausal women regardless of their health status, comprised a wide range of health (absence of disease) and chronic disease characteristics (metabolic diseases, cardiovascular diseases, cancer, and osteoporosis) were included. Full details of participant characteristics are summarized in Supplementary Table 1. Intervention characteristics Exercise training characteristics are summarized in Supplementary Table 1. All included studies compared the effects of exercise training versus a control group using random allocation. Intervention durations of included studies was ranged from 4 weeks to 18 months, while frequency of exercise sessions was ranged from 1 to 7 per w...
    anchor: effects of exercise training on postmenopausal women with chronic diseases

    Sample 2
    positive: inform care planning, including the need for a referral or follow-up. Assessment of nutritional status Nutritional status describes the state of the body in relation to the consumption and utilization of nutrients, and can be classified as well-nourished or malnourished (under- or over-nourished). The assessment of nutritional status uses anthropometric measures to assess body composition (measurement of weight, height, body mass index, body circumferences and skinfold thickness), laboratory tests to assess biochemical parameters, clinical assessment of comorbid conditions, and interviewing to assess dietary practices. Assessment aims to ascertain the impact of the nutritional status on health and functioning, and inform care planning, including the need for referral or follow-up. Assessment of oedema Oedema (e.g. peripheral or lymphoedema) describes an abnormal fluid volume in the circulatory system or in the interstitial space. The assessment of oedema (including initial scr...
    anchor: nutritional status impact on health and functioning

    Sample 3
    positive: required • Patient sitting or lying (if under anaesthesia) • Tubular bandage (if needed) Ø 10 cm • Padding bandage 1 roll • POP 3 rolls of 15 cm • Elastic bandage 2 rolls of 15 cm, 1 roll of 10 cm • Adhesive tape 2.5 cm • Triangular bandage Edge A. Senet/ICRC Table 3.7: Long arm slabs at a glance Method of application Refer to the general procedure for slabs (p. 45) for the first steps. Mark the proximal and distal landmarks. P. Ley/ICRC SLABS 77 Prepare six to eight layers of POP bandages of the required length. Place the wet slab on the limb and mould it. Secure the slab with elastic bandages. P. Ley/ICRC P. Ley/ICRC P. Ley/ICRC 78 PLASTER OF PARIS AND OTHER FRACTURE IMMOBILIZATION METHODS When the POP slab is bent around the elbow, pay special attention to avoid wrinkles, which can cause pain. Make sure the ulnar nerve is not compressed by asking the patient if the inside of their elbow is comfortable. After bandaging, maintain the elbow and wrist in the proper posi...
    anchor: long arm slab application procedure
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            768,
            512,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
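
A hedged sketch of constructing this loss in sentence-transformers; the starting checkpoint is taken from the "finetuned from" note at the top of this card:

from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("kokojake/modernbert-embed-base-fitness-health-matryoshka-8-epochs")

# In-batch negatives ranking loss, applied to the embedding truncated to
# each Matryoshka dimension with equal weight.
loss = MatryoshkaLoss(
    model,
    MultipleNegativesRankingLoss(model),
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
    n_dims_per_step=-1,  # train on all dimensions at every step
)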
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 16
  • gradient_accumulation_steps: 16
  • learning_rate: 2e-05
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.1
  • bf16: True
  • tf32: True
  • load_best_model_at_end: True
  • optim: adamw_torch_fused
  • batch_sampler: no_duplicates

All Hyperparameters

Click to expand
  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 16
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 3
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: True
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • tp_size: 0
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

| Epoch | Step | Training Loss | dim_768_cosine_ndcg@10 | dim_512_cosine_ndcg@10 | dim_256_cosine_ndcg@10 | dim_128_cosine_ndcg@10 | dim_64_cosine_ndcg@10 |
|:------|-----:|--------------:|-----:|-----:|-----:|-----:|-----:|
| 0.2462 | 10 | 13.2182 | - | - | - | - | - |
| 0.4923 | 20 | 12.361 | - | - | - | - | - |
| 0.7385 | 30 | 10.9108 | - | - | - | - | - |
| 0.9846 | 40 | 10.1159 | 0.4810 | 0.4740 | 0.4704 | 0.4424 | 0.3798 |
| 1.2462 | 50 | 9.145 | - | - | - | - | - |
| 1.4923 | 60 | 7.7837 | - | - | - | - | - |
| 1.7385 | 70 | 7.6298 | - | - | - | - | - |
| 1.9846 | 80 | 7.9102 | 0.4889 | 0.4786 | 0.4790 | 0.4453 | 0.3867 |
| 2.2462 | 90 | 7.5969 | - | - | - | - | - |
| 2.4923 | 100 | 6.8696 | - | - | - | - | - |
| 2.7385 | 110 | 7.2096 | - | - | - | - | - |
| **2.9846** | **120** | **7.2675** | **0.4894** | **0.4769** | **0.4768** | **0.4474** | **0.3886** |
  • The bold row denotes the saved checkpoint.

Framework Versions

  • Python: 3.11.12
  • Sentence Transformers: 4.0.2
  • Transformers: 4.51.2
  • PyTorch: 2.6.0+cu124
  • Accelerate: 1.5.2
  • Datasets: 3.5.0
  • Tokenizers: 0.21.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}