SentenceTransformer based on Snowflake/snowflake-arctic-embed-l

This is a sentence-transformers model finetuned from Snowflake/snowflake-arctic-embed-l. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: Snowflake/snowflake-arctic-embed-l
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 1024 dimensions
  • Similarity Function: Cosine Similarity

Model Sources

  • Documentation: Sentence Transformers Documentation (https://www.sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
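
The same module stack can be assembled by hand with the sentence-transformers models API. The sketch below is illustrative only (CLS pooling over a 512-token BERT encoder, followed by L2 normalization); in practice, load the released checkpoint by name as shown under Usage.

from sentence_transformers import SentenceTransformer, models

# Transformer backbone: BertModel with a 512-token context window
word_embedding_model = models.Transformer("Snowflake/snowflake-arctic-embed-l", max_seq_length=512)
# CLS-token pooling over the 1024-dimensional token embeddings
pooling = models.Pooling(word_embedding_model.get_word_embedding_dimension(), pooling_mode="cls")
# L2-normalize so that dot product equals cosine similarity
normalize = models.Normalize()
model = SentenceTransformer(modules=[word_embedding_model, pooling, normalize])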

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("DiamondCutter88/astrology-ft-c8cd64c7-86a1-4203-ac38-826b510e436f")
# Run inference
sentences = [
    'According to the context, why is it unhelpful to follow popular media’s definition of a great girlfriend or rigid dating tips?',
    'There is not one universal definition of a good relationship or lover. Ladies, it’s a waste of time to try and fit the mold of the popular media’s definition of a great girlfriend because that definition may not attract the man that satisfies you. Men, it’s a waste of time to follow rigid dating tips that may not appeal to the woman of your dreams.\nSo how do know what attracts or appeals to the lover of your dreams? More importantly, what attracts or appeals to you? What type of man/woman is the lover of your dreams? What do you need in a lover, in a relationship, in a spouse? You may think you know the answer to these questions. However, popular media may be influencing you to seek qualities that are not your natural preferences.\nThat ultimately results in failed relationships.',
    'Interpreting Planets in the Signs\nLook at the birth chart and take note of what sign and house each planet is in. (Check here if you need help remembering what the symbols/glyphs mean.) Here is a handy, printable worksheet that will help you keep track of your birth chart information. Also take note of the Ascendant\'s sign (Rising Sign), and which house the Ascendant\'s ruler occupies.\nNow you can look up the "Chart Interpretations" of each of your planets and Rising Sign. The Chart Interpretations on the linked page are only general meanings of what each item represents. As you get a feel for the basic energy of each planet, and of the qualities of each sign, you can add your own insight.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
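
Because the model was trained with MatryoshkaLoss (see Training Details), its embeddings can also be truncated to a smaller dimension at load time. A minimal sketch, assuming 256-dimensional vectors are enough for your use case:

from sentence_transformers import SentenceTransformer

# truncate_dim keeps only the leading 256 embedding dimensions
model = SentenceTransformer(
    "DiamondCutter88/astrology-ft-c8cd64c7-86a1-4203-ac38-826b510e436f",
    truncate_dim=256,
)
embeddings = model.encode(["How long can a moving planet stay in aspect during a transit?"])
print(embeddings.shape)
# (1, 256)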

Evaluation

Metrics

Information Retrieval

Metric Value
cosine_accuracy@1 0.9167
cosine_accuracy@3 1.0
cosine_accuracy@5 1.0
cosine_accuracy@10 1.0
cosine_precision@1 0.9167
cosine_precision@3 0.3333
cosine_precision@5 0.2
cosine_precision@10 0.1
cosine_recall@1 0.9167
cosine_recall@3 1.0
cosine_recall@5 1.0
cosine_recall@10 1.0
cosine_ndcg@10 0.9638
cosine_mrr@10 0.9514
cosine_map@100 0.9514
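
These figures come from an information-retrieval evaluation in which each question must retrieve its source passage. Below is a sketch of how such an evaluation can be run with InformationRetrievalEvaluator; the queries, corpus, and relevant_docs dictionaries are placeholders, not the actual evaluation split:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("DiamondCutter88/astrology-ft-c8cd64c7-86a1-4203-ac38-826b510e436f")

# Placeholder data: query id -> question, passage id -> passage,
# and the set of relevant passage ids per query.
queries = {"q1": "How long can a moving planet stay in aspect during a transit?"}
corpus = {"p1": "A moving planet can stay in aspect anywhere from a few hours up to a few years."}
relevant_docs = {"q1": {"p1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="astrology-eval")
metrics = evaluator(model)  # dict with accuracy@k, precision@k, recall@k, NDCG@10, MRR@10, MAP@100
print(metrics)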

Training Details

Training Dataset

Unnamed Dataset

  • Size: 288 training samples
  • Columns: sentence_0 and sentence_1
  • Approximate statistics based on the first 288 samples:
    • sentence_0: string (min: 10 tokens, mean: 19.75 tokens, max: 37 tokens)
    • sentence_1: string (min: 10 tokens, mean: 147.1 tokens, max: 263 tokens)
  • Samples:
    • sentence_0: How does Sagittarius perceive life's possibilities according to the context?
      sentence_1: Sagittarius has a continually changing view of life's possibilities; Pisces adapts itself superficially to its environment (like a chameleon) and reflects it like a mirror. Sometimes they compromise so much that they sacrifice their own interests. The four Mutable signs are at the final month of each season.
    • sentence_0: What characteristic behavior is attributed to Pisces in adapting to its environment?
      sentence_1: Sagittarius has a continually changing view of life's possibilities; Pisces adapts itself superficially to its environment (like a chameleon) and reflects it like a mirror. Sometimes they compromise so much that they sacrifice their own interests. The four Mutable signs are at the final month of each season.
    • sentence_0: How long can a moving planet stay in aspect during a transit?
      sentence_1: A moving planet can stay in aspect anywhere from a few hours up to a few years. The moon, because she moves so fast, makes aspects only for a short hour or two. And then she’s gone. Neptune and Pluto, because they move so slowly (relative motion), can stay in aspect for 2 or 3 years. Use an orb of 1 degree to consider transits (but use a 2 degree orb to plan ahead!) While a moving planet is in aspect to a natal planetary position, the moving planet is said to be “transiting” the natal planet.
      The transit has an influence over you during the entire time that it’s transiting, but it also has a point at which it becomes “exact.” This is the day when it reaches the exact degree, minute, and second of your birth planet. Some transits can become exact three times because of retrograde motion: the transiting planet passes over the point once, then the planet goes retrograde and backtracks over the point, then the planet goes back into direct motion and passes over the point a third time.
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            768,
            512,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
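
    In code, a loss with these parameters is typically built by wrapping MultipleNegativesRankingLoss in MatryoshkaLoss, roughly as sketched below (illustrative only; the exact training script is not part of this card):

    from sentence_transformers import SentenceTransformer
    from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

    model = SentenceTransformer("Snowflake/snowflake-arctic-embed-l")

    # In-batch negatives ranking loss, applied at several embedding truncation sizes
    inner_loss = MultipleNegativesRankingLoss(model)
    loss = MatryoshkaLoss(
        model,
        inner_loss,
        matryoshka_dims=[768, 512, 256, 128, 64],
        matryoshka_weights=[1, 1, 1, 1, 1],
        n_dims_per_step=-1,  # use every listed dimension at each training step
    )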
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 10
  • per_device_eval_batch_size: 10
  • num_train_epochs: 10
  • multi_dataset_batch_sampler: round_robin
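
As a hedged sketch, these values map onto SentenceTransformerTrainingArguments roughly as follows (output_dir is a placeholder; everything not listed above keeps its default):

from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import MultiDatasetBatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="astrology-ft",  # placeholder path
    eval_strategy="steps",
    per_device_train_batch_size=10,
    per_device_eval_batch_size=10,
    num_train_epochs=10,
    multi_dataset_batch_sampler=MultiDatasetBatchSamplers.ROUND_ROBIN,
)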

All Hyperparameters

Click to expand
  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 10
  • per_device_eval_batch_size: 10
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 10
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • tp_size: 0
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: round_robin

Training Logs

Epoch Step cosine_ndcg@10
1.0 29 0.9484
1.7241 50 0.9283
2.0 58 0.9424
3.0 87 0.9283
3.4483 100 0.9484
4.0 116 0.9437
5.0 145 0.9301
5.1724 150 0.9385
6.0 174 0.9539
6.8966 200 0.9484
7.0 203 0.9484
8.0 232 0.9638
8.6207 250 0.9638
9.0 261 0.9638
10.0 290 0.9638

Framework Versions

  • Python: 3.11.12
  • Sentence Transformers: 4.1.0
  • Transformers: 4.51.3
  • PyTorch: 2.6.0+cu124
  • Accelerate: 1.6.0
  • Datasets: 3.6.0
  • Tokenizers: 0.21.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}