SentenceTransformer

This is a sentence-transformers model trained on 584,355 anchor-positive pairs. It maps sentences and paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Maximum Sequence Length: 8192 tokens
  • Output Dimensionality: 1024 dimensions
  • Similarity Function: Cosine Similarity

Full Model Architecture

SentenceTransformer(
  (transformer): Transformer(
    (auto_model): XLMRobertaLoRA(
      (roberta): XLMRobertaModel(
        (embeddings): XLMRobertaEmbeddings(
          (word_embeddings): ParametrizedEmbedding(
            250002, 1024, padding_idx=1
            (parametrizations): ModuleDict(
              (weight): ParametrizationList(
                (0): LoRAParametrization()
              )
            )
          )
          (token_type_embeddings): ParametrizedEmbedding(
            1, 1024
            (parametrizations): ModuleDict(
              (weight): ParametrizationList(
                (0): LoRAParametrization()
              )
            )
          )
        )
        (emb_drop): Dropout(p=0.1, inplace=False)
        (emb_ln): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
        (encoder): XLMRobertaEncoder(
          (layers): ModuleList(
            (0-23): 24 x Block(
              (mixer): MHA(
                (rotary_emb): RotaryEmbedding()
                (Wqkv): ParametrizedLinearResidual(
                  in_features=1024, out_features=3072, bias=True
                  (parametrizations): ModuleDict(
                    (weight): ParametrizationList(
                      (0): LoRAParametrization()
                    )
                  )
                )
                (inner_attn): FlashSelfAttention(
                  (drop): Dropout(p=0.1, inplace=False)
                )
                (inner_cross_attn): FlashCrossAttention(
                  (drop): Dropout(p=0.1, inplace=False)
                )
                (out_proj): ParametrizedLinear(
                  in_features=1024, out_features=1024, bias=True
                  (parametrizations): ModuleDict(
                    (weight): ParametrizationList(
                      (0): LoRAParametrization()
                    )
                  )
                )
              )
              (dropout1): Dropout(p=0.1, inplace=False)
              (drop_path1): StochasticDepth(p=0.0, mode=row)
              (norm1): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
              (mlp): Mlp(
                (fc1): ParametrizedLinear(
                  in_features=1024, out_features=4096, bias=True
                  (parametrizations): ModuleDict(
                    (weight): ParametrizationList(
                      (0): LoRAParametrization()
                    )
                  )
                )
                (fc2): ParametrizedLinear(
                  in_features=4096, out_features=1024, bias=True
                  (parametrizations): ModuleDict(
                    (weight): ParametrizationList(
                      (0): LoRAParametrization()
                    )
                  )
                )
              )
              (dropout2): Dropout(p=0.1, inplace=False)
              (drop_path2): StochasticDepth(p=0.0, mode=row)
              (norm2): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
            )
          )
        )
        (pooler): XLMRobertaPooler(
          (dense): ParametrizedLinear(
            in_features=1024, out_features=1024, bias=True
            (parametrizations): ModuleDict(
              (weight): ParametrizationList(
                (0): LoRAParametrization()
              )
            )
          )
          (activation): Tanh()
        )
      )
    )
  )
  (pooler): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (normalizer): Normalize()
)
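
The Pooling module above is configured for mean pooling (pooling_mode_mean_tokens: True) and is followed by Normalize(), so a sentence embedding is the attention-mask-weighted mean of the token embeddings, L2-normalized. A minimal sketch of that step with dummy tensors (shapes and values are illustrative only, not taken from the model):

import torch

token_embeddings = torch.randn(2, 8, 1024)            # (batch, seq_len, hidden=1024)
attention_mask = torch.ones(2, 8, dtype=torch.long)   # 1 = real token, 0 = padding

mask = attention_mask.unsqueeze(-1).float()            # (batch, seq_len, 1)
summed = (token_embeddings * mask).sum(dim=1)          # sum over non-padding tokens
counts = mask.sum(dim=1).clamp(min=1e-9)               # tokens per sentence
sentence_embeddings = summed / counts                  # mean pooling -> (batch, 1024)
sentence_embeddings = torch.nn.functional.normalize(sentence_embeddings, p=2, dim=1)
print(sentence_embeddings.shape)  # torch.Size([2, 1024])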

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("Jrinky/jina3")
# Run inference
sentences = [
    'What challenges does Mayor Hundred face in leading New York as depicted in March to War',
    'His heroics convinced the citizens of New York to elect him mayor, and March to War opens with Mayor Hundred dealing with unrest in the city. Political cartoons display him as a caped superhero unable to handle the daily needs of the city, and a protest against the war in Iraq has it divided.',
    "Bring your sewing machine - scissors - material - pattern (if you already have one that you want to work on) - have a project that you need assistance with - just want to spend the day with your fellow Caerthen's in a day of sewing and socializing? Come on out!",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
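
The same encode/similarity API supports simple semantic search: embed a query and a corpus, then rank corpus entries by cosine similarity. A minimal sketch (the query and corpus strings are illustrative, adapted from the sample sentences above):

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Jrinky/jina3")

# Illustrative corpus and query, only to show the search pattern
corpus = [
    "His heroics convinced the citizens of New York to elect him mayor.",
    "Come spend the day sewing and socializing with your fellow Caerthen's.",
]
query = "Who was elected mayor of New York?"

corpus_embeddings = model.encode(corpus)
query_embedding = model.encode([query])

# Rank the corpus by cosine similarity to the query
scores = model.similarity(query_embedding, corpus_embeddings)  # shape [1, len(corpus)]
best = int(scores[0].argmax())
print(corpus[best], float(scores[0][best]))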

Training Details

Training Dataset

Unnamed Dataset

  • Size: 584,355 training samples
  • Columns: anchor and positive
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 6 tokens, mean: 17.41 tokens, max: 42 tokens
    • positive: string; min: 5 tokens, mean: 119.19 tokens, max: 1979 tokens
  • Samples:
    • anchor: What resources and tools are recommended for busy brides planning their weddings
      positive: If you are planning on spending a little bit of time on your wedding planning, here is part 2 of my series of great resources and tools for wedding planning that every busy bride should know about. The previous instalment can be viewed here.
    • anchor: How many girls were raised in the house described
      positive: This house is where my parents proudly hung up our diplomas. This house is where 3 girls were raised.
    • anchor: Where did the narrator's dad always barbecue for Easter
      positive: This house is where my dad always barbequed for Easter, rain or shine. This house is where we welcomed family and friends on their first visit to the United States.
  • Loss: cachedselfloss2.CachedInfonce with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
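
The loss module cachedselfloss2.CachedInfonce is custom code that is not bundled with this card. As a hedged sketch, a comparable gradient-cached, in-batch-negatives InfoNCE loss can be configured with sentence-transformers' built-in CachedMultipleNegativesRankingLoss using the same scale and similarity_fct values, over an anchor/positive dataset shaped like the one described above (the pair below is taken from the samples):

from datasets import Dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import CachedMultipleNegativesRankingLoss
from sentence_transformers.util import cos_sim

model = SentenceTransformer("Jrinky/jina3")

# Two-column layout matching the training data: one anchor, one positive per row
train_dataset = Dataset.from_dict({
    "anchor": ["How many girls were raised in the house described"],
    "positive": ["This house is where my parents proudly hung up our diplomas. This house is where 3 girls were raised."],
})

# Stand-in for cachedselfloss2.CachedInfonce with the documented parameters;
# not the exact loss used to train this model
loss = CachedMultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=cos_sim)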
    

Evaluation Dataset

Unnamed Dataset

  • Size: 18,073 evaluation samples
  • Columns: anchor and positive
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 6 tokens, mean: 17.52 tokens, max: 40 tokens
    • positive: string; min: 5 tokens, mean: 107.58 tokens, max: 1832 tokens
  • Samples:
    • anchor: What significant role did the character Raven portray in early cinema
      positive: He played cynical tough guys in modern films, but then branched into westerns where for the most part he was the gallant hero. In fact the ultimate gallant white knight hero in Shane. His part as Raven is a difficult one, yet he pulls it off. He's a cold blooded contract killer, one of the earliest ever portrayed as a film protagonist. Yet he's human and you see flashes of it, his concern for cats. As a cat lover, I can sure identify with that. Raven is also one of the earliest characters in cinema who talks about child abuse making him what he is. Groundbreaking when you think about it. Next to Ladd, the biggest kudos have to go to Laird Cregar, borrowed from 20th Century Fox to play Willard Gates. Gates is a top company executive with Marshall's firm which is a defense contractor which is why the Senate is interested in him. He's basically a jerk who thinks he's so clever. Veronica Lake gets to him real easy because of his weakness for the nightclub scene.
    • anchor: What are the key features and characteristics of the Majestic Pure Dead Sea Mud Mask
      positive: At the same time, it can be used all over your body, not just face. This way, you can clear any part of your skin from its impurities. An additional feature that caught our eyes immediately was the beautifully designed packaging that makes this affordable product look high-end. The combination of grey, black, and blue colors will easily make it stand out in any beauty shop.
        - Good for sensitive and dry skin
        - Affordable price
        - Treats many different skin conditions
        - Not good for oily skin
        - Can feel a bit oily
        Majestic Pure Dead Sea Mud Mask Review
        Majestic Pure is a well-known brand among the beauty and skincare community. They make affordable, natural products and are oftentimes among the celebrity favorites.
    • anchor: What benefits does this product provide for the skin
      positive: This will provide you with softer skin that glows. At the same time, it will help you deal with your clogged pores and provide you the necessary daily acne treatment.
  • Loss: cachedselfloss2.CachedInfonce with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 800
  • per_device_eval_batch_size: 800
  • learning_rate: 2e-05
  • num_train_epochs: 10
  • warmup_ratio: 0.1
  • bf16: True
  • batch_sampler: no_duplicates
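
The non-default hyperparameters above map onto SentenceTransformerTrainingArguments roughly as follows (a sketch; the output directory is a placeholder, not taken from this card):

from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="output/jina3",               # placeholder path
    eval_strategy="steps",
    per_device_train_batch_size=800,
    per_device_eval_batch_size=800,
    learning_rate=2e-5,
    num_train_epochs=10,
    warmup_ratio=0.1,
    bf16=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)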

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 800
  • per_device_eval_batch_size: 800
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 10
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

| Epoch  | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.2052 | 150  | 13.7695       | 17.4625         |
| 0.4104 | 300  | 14.2067       | 17.4452         |
| 0.6156 | 450  | 14.344        | 17.4289         |
| 0.8208 | 600  | 13.705        | 17.3620         |
| 1.0260 | 750  | 13.0304       | 17.1246         |
| 1.2312 | 900  | 13.28         | 16.7495         |
| 1.4364 | 1050 | 13.0314       | 16.5068         |
| 1.6416 | 1200 | 13.0861       | 16.3113         |
| 1.8468 | 1350 | 13.2752       | 16.1406         |
| 2.0520 | 1500 | 12.2868       | 16.0122         |
| 2.2572 | 1650 | 12.9551       | 15.9320         |
| 2.4624 | 1800 | 12.8339       | 15.8444         |
| 2.6676 | 1950 | 12.0719       | 15.8108         |
| 2.8728 | 2100 | 12.7803       | 15.7694         |
| 3.0780 | 2250 | 11.9023       | 15.7460         |
| 3.2832 | 2400 | 12.6882       | 15.7291         |
| 3.4884 | 2550 | 12.3062       | 15.7165         |
| 3.6936 | 2700 | 12.402        | 15.7071         |
| 3.8988 | 2850 | 12.0136       | 15.7014         |
| 4.1040 | 3000 | 12.821        | 15.6873         |
| 4.3092 | 3150 | 12.4667       | 15.6835         |
| 4.5144 | 3300 | 12.6469       | 15.6740         |
| 4.7196 | 3450 | 12.1751       | 15.6519         |
| 4.9248 | 3600 | 12.3627       | 15.6637         |

Framework Versions

  • Python: 3.10.14
  • Sentence Transformers: 3.4.1
  • Transformers: 4.49.0
  • PyTorch: 2.3.1+cu121
  • Accelerate: 1.5.2
  • Datasets: 3.4.1
  • Tokenizers: 0.21.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

CachedInfonce

@misc{gao2021scaling,
    title={Scaling Deep Contrastive Learning Batch Size under Memory Limited Setup},
    author={Luyu Gao and Yunyi Zhang and Jiawei Han and Jamie Callan},
    year={2021},
    eprint={2101.06983},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}