SentenceTransformer

This is a sentence-transformers model trained on the json dataset. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Maximum Sequence Length: 1024 tokens
  • Output Dimensionality: 384 dimensions
  • Similarity Function: Cosine Similarity
  • Model Size: ~41.5M parameters (weights stored in float16)
  • Training Dataset:
    • json

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 1024, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
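For reference, the same two-module stack can be assembled by hand from sentence_transformers building blocks. A minimal sketch; the base checkpoint path is a placeholder, since the card does not name the underlying BERT model:

from sentence_transformers import SentenceTransformer, models

# Placeholder: the card does not state which BERT checkpoint the model was initialized from.
word_embedding_model = models.Transformer(
    "path/to/base-bert-checkpoint",
    max_seq_length=1024,
    do_lower_case=False,
)
# Mean pooling over token embeddings, matching the Pooling config above (384-dim output).
pooling_model = models.Pooling(
    word_embedding_model.get_word_embedding_dimension(),
    pooling_mode_mean_tokens=True,
)
model = SentenceTransformer(modules=[word_embedding_model, pooling_model])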

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("pankajrajdeo/bond-embed-v1-fp16")
# Run inference
sentences = [
    'Light-chain amyloidosis',
    'amyloidosis primary systemic',
    'partial deletion of the long arm of chromosome X',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
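The model card also lists semantic search as a target use case. Below is a minimal sketch of retrieving the closest terms for a query with sentence_transformers.util; the corpus and query strings are illustrative, not taken from the training data:

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("pankajrajdeo/bond-embed-v1-fp16")

# Illustrative corpus of candidate terms (not the model's actual training data)
corpus = [
    "Light-chain amyloidosis",
    "amyloidosis primary systemic",
    "partial deletion of the long arm of chromosome X",
]
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)

# Illustrative query
query_embedding = model.encode("AL amyloidosis", convert_to_tensor=True)

# Cosine-similarity search; returns the top_k closest corpus entries per query
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(corpus[hit["corpus_id"]], round(hit["score"], 4))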

Evaluation

Metrics

Information Retrieval

Metric Value
cosine_accuracy@1 0.6303
cosine_accuracy@3 0.8148
cosine_accuracy@5 0.8775
cosine_accuracy@10 0.9268
cosine_precision@1 0.6303
cosine_precision@3 0.2763
cosine_precision@5 0.1798
cosine_precision@10 0.0957
cosine_recall@1 0.6217
cosine_recall@3 0.8081
cosine_recall@5 0.8724
cosine_recall@10 0.9241
cosine_ndcg@10 0.7797
cosine_mrr@10 0.7342
cosine_map@100 0.7341
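These figures come from an information-retrieval evaluation (the training logs below label it owl_ontology_eval). A minimal sketch of computing the same metric family with InformationRetrievalEvaluator, using a toy query/corpus/relevance mapping purely for illustration:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("pankajrajdeo/bond-embed-v1-fp16")

# Toy data for illustration only; the card's actual evaluation split is not reproduced here.
queries = {"q1": "Light-chain amyloidosis"}
corpus = {
    "d1": "amyloidosis primary systemic",
    "d2": "partial deletion of the long arm of chromosome X",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="owl_ontology_eval")
results = evaluator(model)
print(results)  # dict with keys such as ..._cosine_accuracy@1, ..._cosine_ndcg@10, ..._cosine_mrr@10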

Training Details

Training Dataset

json

  • Dataset: json
  • Size: 1,441,905 training samples
  • Columns: anchor and positive
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 3 tokens, mean: 9.48 tokens, max: 47 tokens
    • positive: string; min: 3 tokens, mean: 8.68 tokens, max: 30 tokens
  • Samples (anchor → positive):
    • Mangshan horned toad → Mangshan spadefoot toad
    • Leuconotopicos borealis → Picoides borealis
    • Cylindrella teneriensis → Teneria teneriensis
  • Loss: CachedMultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
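As a rough sketch, this loss configuration corresponds to the following construction; mini_batch_size is an assumed value (it only caps per-chunk memory use and does not change the effective batch size):

from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import CachedMultipleNegativesRankingLoss

model = SentenceTransformer("pankajrajdeo/bond-embed-v1-fp16")

# scale=20.0 and cosine similarity (the default similarity_fct) match the parameters above;
# mini_batch_size is an assumption, chosen only to bound GPU memory per forward pass.
loss = CachedMultipleNegativesRankingLoss(model, scale=20.0, mini_batch_size=64)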
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 1024
  • learning_rate: 1.5e-05
  • num_train_epochs: 5
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.05
  • bf16: True
  • dataloader_num_workers: 32
  • load_best_model_at_end: True
  • gradient_checkpointing: True
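These non-default values map directly onto SentenceTransformerTrainingArguments. A minimal sketch, with output_dir as a placeholder and eval/save step counts omitted because the card does not list them:

from sentence_transformers import SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="output/bond-embed-v1",  # placeholder path
    eval_strategy="steps",
    per_device_train_batch_size=1024,
    learning_rate=1.5e-5,
    num_train_epochs=5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.05,
    bf16=True,
    dataloader_num_workers=32,
    load_best_model_at_end=True,
    gradient_checkpointing=True,
)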

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 1024
  • per_device_eval_batch_size: 8
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 1.5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 5
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.05
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 32
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • hub_revision: None
  • gradient_checkpointing: True
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • liger_kernel_config: None
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch Step Training Loss owl_ontology_eval_cosine_ndcg@10
0.0717 100 1.3232 -
0.1434 200 1.021 -
0.2151 300 0.9633 -
0.2867 400 0.9068 -
0.3297 460 - 0.7207
0.3584 500 0.8723 -
0.4301 600 0.852 -
0.5018 700 0.8161 -
0.5735 800 0.7939 -
0.6452 900 0.7935 -
0.6595 920 - 0.7364
0.7168 1000 0.7646 -
0.7885 1100 0.7464 -
0.8602 1200 0.7376 -
0.9319 1300 0.7313 -
0.9892 1380 - 0.7468
1.0036 1400 0.7099 -
1.0753 1500 0.6884 -
1.1470 1600 0.6776 -
1.2186 1700 0.6694 -
1.2903 1800 0.6641 -
1.3190 1840 - 0.7561
1.3620 1900 0.6526 -
1.4337 2000 0.6524 -
1.5054 2100 0.6364 -
1.5771 2200 0.6339 -
1.6487 2300 0.626 0.7614
1.7204 2400 0.6197 -
1.7921 2500 0.6193 -
1.8638 2600 0.6155 -
1.9355 2700 0.6142 -
1.9785 2760 - 0.7662
2.0072 2800 0.5853 -
2.0789 2900 0.5824 -
2.1505 3000 0.5769 -
2.2222 3100 0.5765 -
2.2939 3200 0.5608 -
2.3082 3220 - 0.7698
2.3656 3300 0.5695 -
2.4373 3400 0.5641 -
2.5090 3500 0.5638 -
2.5806 3600 0.554 -
2.6380 3680 - 0.7735
2.6523 3700 0.5539 -
2.7240 3800 0.5495 -
2.7957 3900 0.5556 -
2.8674 4000 0.5397 -
2.9391 4100 0.5447 -
2.9677 4140 - 0.7757
3.0108 4200 0.5331 -
3.0824 4300 0.5336 -
3.1541 4400 0.5346 -
3.2258 4500 0.5247 -
3.2975 4600 0.5241 0.7775
3.3692 4700 0.5257 -
3.4409 4800 0.5241 -
3.5125 4900 0.5171 -
3.5842 5000 0.5215 -
3.6272 5060 - 0.7787
3.6559 5100 0.5203 -
3.7276 5200 0.5214 -
3.7993 5300 0.5266 -
3.8710 5400 0.5127 -
3.9427 5500 0.5062 -
3.9570 5520 - 0.7790
4.0143 5600 0.5104 -
4.0860 5700 0.5155 -
4.1577 5800 0.5042 -
4.2294 5900 0.5174 -
4.2867 5980 - 0.7797
4.3011 6000 0.509 -
4.3728 6100 0.5106 -
4.4444 6200 0.5076 -
4.5161 6300 0.5046 -
4.5878 6400 0.5077 -
4.6165 6440 - 0.7795
4.6595 6500 0.5114 -
4.7312 6600 0.5103 -
4.8029 6700 0.5106 -
4.8746 6800 0.5102 -
4.9462 6900 0.5076 0.7797

Framework Versions

  • Python: 3.11.11
  • Sentence Transformers: 3.4.1
  • Transformers: 4.53.2
  • PyTorch: 2.6.0+cu124
  • Accelerate: 1.5.2
  • Datasets: 3.2.0
  • Tokenizers: 0.21.0

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

CachedMultipleNegativesRankingLoss

@misc{gao2021scaling,
    title={Scaling Deep Contrastive Learning Batch Size under Memory Limited Setup},
    author={Luyu Gao and Yunyi Zhang and Jiawei Han and Jamie Callan},
    year={2021},
    eprint={2101.06983},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}