SentenceTransformer

This is a sentence-transformers model. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Model Size: 572M parameters (BF16 safetensors)
  • Maximum Sequence Length: 8192 tokens
  • Output Dimensionality: 1024 dimensions
  • Similarity Function: Cosine Similarity
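
Because the model ends in a Normalize() module (visible in the architecture below), its embeddings are unit-length, so cosine similarity reduces to a plain dot product. A minimal sketch of that equivalence, using placeholder tensors rather than real model outputs:

import torch
import torch.nn.functional as F

# Placeholder vectors standing in for two 1024-dim embeddings
a = torch.randn(1024)
b = torch.randn(1024)

# Cosine similarity on the raw vectors...
cos = F.cosine_similarity(a, b, dim=0)

# ...equals a dot product once both are L2-normalized, which is
# exactly what the final Normalize() module does to model outputs
dot = F.normalize(a, dim=0) @ F.normalize(b, dim=0)

assert torch.allclose(cos, dot, atol=1e-6)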

Model Sources

  • Documentation: https://sbert.net
  • Repository: https://github.com/UKPLab/sentence-transformers
  • Hugging Face: https://huggingface.co/models?library=sentence-transformers

Full Model Architecture

SentenceTransformer(
  (transformer): Transformer(
    (auto_model): XLMRobertaLoRA(
      (roberta): XLMRobertaModel(
        (embeddings): XLMRobertaEmbeddings(
          (word_embeddings): ParametrizedEmbedding(
            250002, 1024, padding_idx=1
            (parametrizations): ModuleDict(
              (weight): ParametrizationList(
                (0): LoRAParametrization()
              )
            )
          )
          (token_type_embeddings): ParametrizedEmbedding(
            1, 1024
            (parametrizations): ModuleDict(
              (weight): ParametrizationList(
                (0): LoRAParametrization()
              )
            )
          )
        )
        (emb_drop): Dropout(p=0.1, inplace=False)
        (emb_ln): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
        (encoder): XLMRobertaEncoder(
          (layers): ModuleList(
            (0-23): 24 x Block(
              (mixer): MHA(
                (rotary_emb): RotaryEmbedding()
                (Wqkv): ParametrizedLinearResidual(
                  in_features=1024, out_features=3072, bias=True
                  (parametrizations): ModuleDict(
                    (weight): ParametrizationList(
                      (0): LoRAParametrization()
                    )
                  )
                )
                (inner_attn): FlashSelfAttention(
                  (drop): Dropout(p=0.1, inplace=False)
                )
                (inner_cross_attn): FlashCrossAttention(
                  (drop): Dropout(p=0.1, inplace=False)
                )
                (out_proj): ParametrizedLinear(
                  in_features=1024, out_features=1024, bias=True
                  (parametrizations): ModuleDict(
                    (weight): ParametrizationList(
                      (0): LoRAParametrization()
                    )
                  )
                )
              )
              (dropout1): Dropout(p=0.1, inplace=False)
              (drop_path1): StochasticDepth(p=0.0, mode=row)
              (norm1): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
              (mlp): Mlp(
                (fc1): ParametrizedLinear(
                  in_features=1024, out_features=4096, bias=True
                  (parametrizations): ModuleDict(
                    (weight): ParametrizationList(
                      (0): LoRAParametrization()
                    )
                  )
                )
                (fc2): ParametrizedLinear(
                  in_features=4096, out_features=1024, bias=True
                  (parametrizations): ModuleDict(
                    (weight): ParametrizationList(
                      (0): LoRAParametrization()
                    )
                  )
                )
              )
              (dropout2): Dropout(p=0.1, inplace=False)
              (drop_path2): StochasticDepth(p=0.0, mode=row)
              (norm2): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
            )
          )
        )
        (pooler): XLMRobertaPooler(
          (dense): ParametrizedLinear(
            in_features=1024, out_features=1024, bias=True
            (parametrizations): ModuleDict(
              (weight): ParametrizationList(
                (0): LoRAParametrization()
              )
            )
          )
          (activation): Tanh()
        )
      )
    )
  )
  (pooler): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (normalizer): Normalize()
)
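
The top-level Pooling module is configured for mean pooling (pooling_mode_mean_tokens: True): token embeddings are averaged over non-padding positions to produce one 1024-dimensional sentence vector. A standalone sketch of that operation (the toy tensors below are illustrative, not tied to this model's tokenizer):

import torch

def mean_pool(token_embeddings: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    """Average token embeddings over real (non-padding) tokens.
    token_embeddings: (batch, seq_len, hidden); attention_mask: (batch, seq_len)."""
    mask = attention_mask.unsqueeze(-1).float()     # (batch, seq_len, 1)
    summed = (token_embeddings * mask).sum(dim=1)   # (batch, hidden)
    counts = mask.sum(dim=1).clamp(min=1e-9)        # avoid division by zero
    return summed / counts

# Toy check: 2 sequences of 4 tokens, hidden size 1024
emb = torch.randn(2, 4, 1024)
mask = torch.tensor([[1, 1, 1, 0], [1, 1, 0, 0]])
print(mean_pool(emb, mask).shape)  # torch.Size([2, 1024])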

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("Jrinky/final_stage1")
# Run inference
sentences = [
    'What items are present in the described setting along with the policy on pets',
    'There is also a grandfather clock and two oriental lions on grey marbled pedestals. Pets are allowed (Charges may be applicable)',
    'Investigators stated that Philoumenos appeared to have been trying to protect his face with his hands when a blow to his face or head severed one finger on each hand. Raby escaped the scene of the crime undetected. Raby was subsequently found to have acted alone, "without any connection to a religious or political entity." An investigation launched by the Israeli police initially failed to identify the killer. Raby was arrested on 17 November 1982 as he again attempted enter the Monastery at Jacob\'s Well illicitly by climbing over a wall; he was carrying hand grenades. Raby supplied the police with accurate details of his earlier, previously unsolved, crimes. These were the murder of Philoumenos; a March 1979 murder of a Jewish gynecologist in Tel-Aviv; the murder of the family of a woman in Lod, Israel in April 1979 who claimed to have clairvoyant powers; and an assault on a nun at the Jacob\'s Well holy site in April 1982. The nun was seriously wounded in the attack. Both she and the gynecologist were attacked by axe, according to prosecutors. Raby, a newly religious Jew, was described as unwashed, dressed in worn-out clothing, and audibly muttered passages of scripture in a strange manner. Psychiatric evaluations found that he was mentally incompetent to stand trial; he was committed to a mental hospital; details of his subsequent whereabouts are restricted by privacy regulations. At a court hearing after his arrest, an Israeli prosecutor told the court that Raby was convinced that the monastery was the site of the ancient Jewish Temple, and that he made an attempt on the life of the nun "in response to a divine command." Erroneous accounts\nInitial accounts depicted the murder as an anti-Christian hate attack carried out by a group of Jewish settlers, the result being what Maariv described as "a wave of hatred" in Greece. Reports indicating that "radical Jews" had tortured Philoumenos and "cut off the fingers of his hand" before killing him had appeared in Greek newspapers. Maariv also quoted an official in the Greek Orthodox Patriarchate in Jerusalem asserting that "the murder was carried out by radical religious Jews" claiming that "the Well does not belong to Christians but to Jews". In a 2017 article in the journal Israel Studies, researchers David Gurevich and Yisca Harani found that false accounts blaming the slaying on "settlers" and "Zionist extremists" persisted even after the arrest of the assailant and his confinement in a mental institution, and that there were "patterns of ritual murder accusation in the popular narrative." The same theme was echoed in parts of the Eastern Orthodox community and by some secular sources, including Blackwell\'s Dictionary of Eastern Christianity, the Encyclopedia of the Israeli-Palestinian Conflict,  The Spectator and Times Literary Supplement, as well as Wikipedia. Gurevich and Harani contended that a 1989 account of the murder, published in Orthodox America, a publication of the Russian Orthodox Church Outside Russia, became the basis of an anti-Semitic ritual murder narrative, according to which a group of anti-Christianity Jews first harassed Philoumenos and destroyed Christian holy objects at the monastery, then murdered him. Veneration\nIn 2009 the Greek Orthodox Patriarchate of Jerusalem recognised him as a holy martyr of the Eastern Orthodox Church, thirty years after his "martyrdom". 
The "careful" wording of the pronouncement of the Jerusalem Patriarchate that canonized Philoumenos makes no mention of murderer\'s faith or ethnicity; he is described as a "vile man" a "heterodox fanatic visitor" and, inaccurately, as an individual who "with an axe, opened a deep cut across his forehead, cut off the fingers of his right hand, and upon escaping threw a grenade which ended the Father\'s life."',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
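
The same embeddings also support semantic search. A small sketch that ranks a toy corpus against a query; the corpus strings are borrowed from the dataset samples shown later in this card:

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Jrinky/final_stage1")

query = "Where is Nagpada located"
corpus = [
    "Nagpada is a neighbourhood in South Mumbai.",
    "Trinity-Bellwoods is an inner city neighbourhood in Toronto, Ontario, Canada.",
    "Anthony Elujoba (born 1948) is a Nigerian professor of Pharmacognosy.",
]

query_emb = model.encode([query])
corpus_emb = model.encode(corpus)

# model.similarity applies the configured cosine similarity
scores = model.similarity(query_emb, corpus_emb)[0]
for score, text in sorted(zip(scores.tolist(), corpus), reverse=True):
    print(f"{score:.3f}  {text}")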

Training Details

Training Dataset

Unnamed Dataset

  • Size: 199,321 training samples
  • Columns: anchor and positive
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 7 tokens, mean 17.09 tokens, max 46 tokens
    • positive (string): min 8 tokens, mean 109.45 tokens, max 1835 tokens
  • Samples:
    • anchor: Where is Nagpada located
      positive: Nagpada is a neighbourhood in South Mumbai.
    • anchor: What types of players are associated with Folkestone F.C., Midland Football League, and English Football League players
      positive: Folkestone F.C. players / Midland Football League players / English Football League players
    • anchor: What is Anthony Elujoba known for in the field of Pharmacognosy
      positive: Anthony Elujoba (born 1948) is a Nigerian professor of Pharmacognosy, fondly referred to as the "village chemist" because of his involvement in research into medicinal plants. He was acting vice chancellor of Obafemi Awolowo University, Nigeria.
  • Loss: cachedselfloss.CachedInfonce with these parameters (a conceptual sketch of the objective follows below):
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
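
cachedselfloss.CachedInfonce is a custom loss module whose source is not published with this card, so the sketch below shows only the underlying InfoNCE objective with in-batch negatives (the technique from the Gao et al. citation at the end of this card), assuming the stated scale of 20.0 and cosine similarity. The "cached" variant additionally chunks each batch and caches gradients so that very large batches, such as the 2000 used here, fit in memory.

import torch
import torch.nn.functional as F

def infonce_loss(anchors: torch.Tensor, positives: torch.Tensor,
                 scale: float = 20.0) -> torch.Tensor:
    """InfoNCE with in-batch negatives: each anchor's positive is the
    same-index row of `positives`; every other row acts as a negative."""
    a = F.normalize(anchors, dim=-1)
    p = F.normalize(positives, dim=-1)
    logits = scale * (a @ p.T)          # (batch, batch) scaled cosine similarities
    labels = torch.arange(len(a))       # matching pairs sit on the diagonal
    return F.cross_entropy(logits, labels)

# Toy batch of 4 anchor/positive embedding pairs
loss = infonce_loss(torch.randn(4, 1024), torch.randn(4, 1024))
print(loss.item())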
    

Evaluation Dataset

Unnamed Dataset

  • Size: 4,068 evaluation samples
  • Columns: anchor and positive
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 6 tokens, mean 17.26 tokens, max 40 tokens
    • positive (string): min 6 tokens, mean 108.28 tokens, max 2233 tokens
  • Samples:
    • anchor: What metaphor is being used to describe collaboration in the text segment
      positive: Like two oxen in a field, tied shoulder to shoulder. With Jesus doing all of the heavy lifting.
    • anchor: What titles did McGurk win while playing as a schoolboy and a student
      positive: He won consecutive MacRory Cup titles lining out as a schoolboy with St Patrick's College, Maghera before winning a Sigerson Cup title as a student at Queen's University Belfast. McGurk progressed onto the Lavey senior teams in both codes and was corner-forward on the team that won the All-Ireland SCFC title in 1991.
    • anchor: What are the borders of the Trinity-Bellwoods neighborhood in Toronto
      positive: Trinity-Bellwoods is an inner city neighbourhood in Toronto, Ontario, Canada. It is bounded on the east by Bathurst Street, on the north by College Street, on the south by Queen Street West, and by Dovercourt Road on the west.
  • Loss: cachedselfloss.CachedInfonce with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 2000
  • per_device_eval_batch_size: 2000
  • learning_rate: 2e-05
  • num_train_epochs: 1
  • warmup_ratio: 0.1
  • bf16: True
  • batch_sampler: no_duplicates
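
These settings map directly onto the Sentence Transformers v3 training API. A hedged sketch of what the training setup may have looked like; the base checkpoint, the datasets, and the custom loss are not published with this card, so labeled stand-ins are used (CachedMultipleNegativesRankingLoss is the library's closest published equivalent of cachedselfloss.CachedInfonce):

from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import CachedMultipleNegativesRankingLoss
from sentence_transformers.training_args import BatchSamplers

# Stand-in base checkpoint: the card does not say which model training started from
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Toy anchor/positive pairs mirroring the dataset columns described above
train_dataset = Dataset.from_dict({
    "anchor": ["Where is Nagpada located"],
    "positive": ["Nagpada is a neighbourhood in South Mumbai."],
})
eval_dataset = Dataset.from_dict({
    "anchor": ["What are the borders of the Trinity-Bellwoods neighborhood in Toronto"],
    "positive": ["Trinity-Bellwoods is an inner city neighbourhood in Toronto, Ontario, Canada."],
})

# Stand-in loss: the card's cachedselfloss.CachedInfonce is custom and unpublished
loss = CachedMultipleNegativesRankingLoss(model, scale=20.0)

args = SentenceTransformerTrainingArguments(
    output_dir="final_stage1",
    eval_strategy="steps",
    per_device_train_batch_size=2000,  # per the card; needs substantial hardware at this size
    per_device_eval_batch_size=2000,
    learning_rate=2e-5,
    num_train_epochs=1,
    warmup_ratio=0.1,
    bf16=True,  # per the card; requires bf16-capable hardware
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=loss,
)
trainer.train()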

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 2000
  • per_device_eval_batch_size: 2000
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 1
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch  Step  Training Loss  Validation Loss
0.1    10    0.3939         0.4079
0.2    20    0.4225         0.3920
0.3    30    0.4067         0.3819
0.4    40    0.3918         0.3760
0.5    50    0.4631         0.3719
0.6    60    0.3806         0.3686
0.7    70    0.3971         0.3663
0.8    80    0.3788         0.3655
0.9    90    0.3852         0.3649
1.0    100   0.3881         0.3648

Framework Versions

  • Python: 3.11.8
  • Sentence Transformers: 3.4.1
  • Transformers: 4.49.0
  • PyTorch: 2.4.0+cu121
  • Accelerate: 1.4.0
  • Datasets: 3.3.2
  • Tokenizers: 0.21.0

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

CachedInfonce

@misc{gao2021scaling,
    title={Scaling Deep Contrastive Learning Batch Size under Memory Limited Setup},
    author={Luyu Gao and Yunyi Zhang and Jiawei Han and Jamie Callan},
    year={2021},
    eprint={2101.06983},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}