---
tags:
  - sentence-transformers
  - sentence-similarity
  - feature-extraction
  - generated_from_trainer
  - dataset_size:3560
  - loss:MultipleNegativesRankingLoss
  - loss:CosineSimilarityLoss
base_model: jinaai/jina-embedding-b-en-v1
widget:
  - source_sentence: show my best performing investments over the past 5 years
    sentences:
      - show my holdings that have performed well over the past 5 years
      - |
        Show me the geographic distribution of my investments
      - Can you break down my exposure to X?
  - source_sentence: How to address the red flags I have?
    sentences:
      - Which of my stocks are most volatile?
      - Do I hold any equity mutual funds in my portfolio?
      - How to deal with my red flags?
  - source_sentence: Mere funds ki situation kya hai?
    sentences:
      - Do I hold any equity mutual funds in my portfolio?
      - Mere funds kese chal rahe hai?
      - Show my riskiest mutual funds
  - source_sentence: Is my selection of mutual funds effective for the current market?
    sentences:
      - What are my riskiest holdings
      - Is my mutual fund selection good for the current market?
      - What is the expected return of my portfolio?
  - source_sentence: What is the performance of my portfolio?
    sentences:
      - Sell recommendations, do I have any ?
      - How am I doing compared to other investors
      - How is my portfolio performing
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
  - cosine_accuracy@1
  - cosine_accuracy@3
  - cosine_accuracy@5
  - cosine_accuracy@10
  - cosine_precision@1
  - cosine_precision@3
  - cosine_precision@5
  - cosine_precision@10
  - cosine_recall@1
  - cosine_recall@3
  - cosine_recall@5
  - cosine_recall@10
  - cosine_ndcg@10
  - cosine_mrr@10
  - cosine_map@100
model-index:
  - name: SentenceTransformer based on jinaai/jina-embedding-b-en-v1
    results:
      - task:
          type: information-retrieval
          name: Information Retrieval
        dataset:
          name: test eval
          type: test-eval
        metrics:
          - type: cosine_accuracy@1
            value: 0.8735955056179775
            name: Cosine Accuracy@1
          - type: cosine_accuracy@3
            value: 0.9915730337078652
            name: Cosine Accuracy@3
          - type: cosine_accuracy@5
            value: 0.9971910112359551
            name: Cosine Accuracy@5
          - type: cosine_accuracy@10
            value: 1
            name: Cosine Accuracy@10
          - type: cosine_precision@1
            value: 0.8735955056179775
            name: Cosine Precision@1
          - type: cosine_precision@3
            value: 0.33052434456928836
            name: Cosine Precision@3
          - type: cosine_precision@5
            value: 0.199438202247191
            name: Cosine Precision@5
          - type: cosine_precision@10
            value: 0.09999999999999998
            name: Cosine Precision@10
          - type: cosine_recall@1
            value: 0.8735955056179775
            name: Cosine Recall@1
          - type: cosine_recall@3
            value: 0.9915730337078652
            name: Cosine Recall@3
          - type: cosine_recall@5
            value: 0.9971910112359551
            name: Cosine Recall@5
          - type: cosine_recall@10
            value: 1
            name: Cosine Recall@10
          - type: cosine_ndcg@10
            value: 0.9477733494874759
            name: Cosine Ndcg@10
          - type: cosine_mrr@10
            value: 0.9297752808988763
            name: Cosine Mrr@10
          - type: cosine_map@100
            value: 0.9297752808988764
            name: Cosine Map@100
---

SentenceTransformer based on jinaai/jina-embedding-b-en-v1

This is a sentence-transformers model finetuned from jinaai/jina-embedding-b-en-v1. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: jinaai/jina-embedding-b-en-v1
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: T5EncoderModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
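
The (1): Pooling block uses mean pooling (pooling_mode_mean_tokens: True): token embeddings from the T5 encoder are averaged over non-padding positions to produce one 768-dimensional vector per sentence. A minimal sketch of that computation in plain PyTorch, for illustration only (the function name is ours, not a library API):

import torch

def mean_pool(token_embeddings: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    # token_embeddings: (batch, seq_len, 768); attention_mask: (batch, seq_len)
    mask = attention_mask.unsqueeze(-1).float()      # (batch, seq_len, 1)
    summed = (token_embeddings * mask).sum(dim=1)    # sum embeddings of real (non-padding) tokens
    counts = mask.sum(dim=1).clamp(min=1e-9)         # number of real tokens per sentence
    return summed / counts                           # (batch, 768) sentence embeddings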

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub (repo id inferred from this model card)
model = SentenceTransformer("Bharatdeep-H/pq_cache_2")
# Run inference
sentences = [
    'What is the performance of my portfolio?',
    'How is my portfolio performing',
    'Sell recommendations, do I have any ?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 768)

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# torch.Size([3, 3])
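
Since the model is tuned on paraphrased portfolio questions, a natural follow-up is ranking a small set of stored intents against a new query. A hedged sketch reusing the model loaded above; the corpus strings come from the widget examples, while the query and variable names are ours:

# Rank stored intents against a user query by cosine similarity.
query_emb = model.encode(["How is my portfolio doing?"])
corpus = [
    "How is my portfolio performing",
    "Show my riskiest mutual funds",
    "Do I hold any equity mutual funds in my portfolio?",
]
scores = model.similarity(query_emb, model.encode(corpus))  # torch.Size([1, 3])
best = scores.argmax(dim=1).item()
print(corpus[best], scores[0, best].item())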

Evaluation

Metrics

Information Retrieval

Metric                Value
cosine_accuracy@1     0.8736
cosine_accuracy@3     0.9916
cosine_accuracy@5     0.9972
cosine_accuracy@10    1.0
cosine_precision@1    0.8736
cosine_precision@3    0.3305
cosine_precision@5    0.1994
cosine_precision@10   0.1
cosine_recall@1       0.8736
cosine_recall@3       0.9916
cosine_recall@5       0.9972
cosine_recall@10      1.0
cosine_ndcg@10        0.9478
cosine_mrr@10         0.9298
cosine_map@100        0.9298
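
These numbers come from an information-retrieval evaluation on the test-eval split. A minimal sketch of how such an evaluation is wired up with sentence-transformers' InformationRetrievalEvaluator; the ids and pairings below are toy placeholders, not the actual split, and the repo id is inferred from this card:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("Bharatdeep-H/pq_cache_2")  # repo id inferred from this card

queries = {"q1": "What is the performance of my portfolio?"}   # query id -> text
corpus = {"d1": "How is my portfolio performing",              # doc id -> text
          "d2": "Show my riskiest mutual funds"}
relevant_docs = {"q1": {"d1"}}                                 # query id -> relevant doc ids

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="test-eval")
results = evaluator(model)  # dict of metrics keyed like "test-eval_cosine_ndcg@10"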

Training Details

Training Datasets

Unnamed Dataset

  • Size: 1,780 training samples
  • Columns: sentence_0, sentence_1, and label
  • Approximate statistics based on the first 1000 samples:
                sentence_0          sentence_1          label
    type        string              string              float
    details     min: 4 tokens       min: 4 tokens       min: 1.0
                mean: 11.28 tokens  mean: 9.96 tokens   mean: 1.0
                max: 26 tokens      max: 33 tokens      max: 1.0
  • Samples:
    sentence_0: List my recommendations for investments
    sentence_1: Show me my recommendations
    label: 1.0

    sentence_0: Can you provide python code to locate the best funds in my portfolio?
    sentence_1: Give me python code to find best funds in my portfolio
    label: 1.0

    sentence_0: what's the equity vs debt ratio for my investments?
    sentence_1: what's the equity to debt ratio of my portfolio?
    label: 1.0
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

Unnamed Dataset

  • Size: 1,780 training samples
  • Columns: sentence_0, sentence_1, and label
  • Approximate statistics based on the first 1000 samples:
                sentence_0          sentence_1          label
    type        string              string              float
    details     min: 4 tokens       min: 4 tokens       min: 1.0
                mean: 11.26 tokens  mean: 10.0 tokens   mean: 1.0
                max: 26 tokens      max: 33 tokens      max: 1.0
  • Samples:
    sentence_0: Can you tell me if I own equity funds?
    sentence_1: Do I hold any equity funds?
    label: 1.0

    sentence_0: what are some strategies to lower my portfolio risk?
    sentence_1: are there any ways to lower risk in my portfolio ?
    label: 1.0

    sentence_0: What are the risks associated with my portfolio?
    sentence_1: Details on my portfolio risk
    label: 1.0
  • Loss: CosineSimilarityLoss with these parameters:
    {
        "loss_fct": "torch.nn.modules.loss.MSELoss"
    }
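
Both losses are standard sentence-transformers classes. A hedged sketch of how the two configurations above map to code, assuming training starts from the base model (variable names are ours):

import torch
from sentence_transformers import SentenceTransformer, losses

model = SentenceTransformer("jinaai/jina-embedding-b-en-v1")

# First dataset: in-batch negatives ranking with scale=20.0 and cosine similarity.
mnrl_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)

# Second dataset: regress cosine(sentence_0, sentence_1) onto the 1.0 labels via MSE.
cosine_loss = losses.CosineSimilarityLoss(model, loss_fct=torch.nn.MSELoss())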
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 32
  • num_train_epochs: 10
  • multi_dataset_batch_sampler: round_robin
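
A hedged sketch of how these non-default values plug into a SentenceTransformerTrainer run, continuing the loss sketch above; the dataset names, toy rows, and output path are ours, not taken from the actual training script:

from datasets import Dataset
from sentence_transformers import SentenceTransformerTrainer, SentenceTransformerTrainingArguments
from sentence_transformers.training_args import MultiDatasetBatchSamplers

# Tiny illustrative datasets; the real ones hold 1,780 pairs each.
mnrl_ds = Dataset.from_dict({
    "sentence_0": ["List my recommendations for investments"],
    "sentence_1": ["Show me my recommendations"],
})
cosine_ds = Dataset.from_dict({
    "sentence_0": ["Can you tell me if I own equity funds?"],
    "sentence_1": ["Do I hold any equity funds?"],
    "label": [1.0],
})

args = SentenceTransformerTrainingArguments(
    output_dir="pq_cache_2",  # assumed output path
    eval_strategy="steps",
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    num_train_epochs=10,
    multi_dataset_batch_sampler=MultiDatasetBatchSamplers.ROUND_ROBIN,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset={"mnrl": mnrl_ds, "cosine": cosine_ds},
    loss={"mnrl": mnrl_loss, "cosine": cosine_loss},
)
trainer.train()

With round_robin sampling, the trainer alternates batches between the two datasets, so each loss contributes an equal number of steps per epoch.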

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 32
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 10
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: round_robin

Training Logs

Epoch    Step  Training Loss  test-eval_cosine_ndcg@10
1.0       112  -              0.9084
2.0       224  -              0.9168
3.0       336  -              0.9292
4.0       448  -              0.9358
4.4643    500  0.1968         0.9343
5.0       560  -              0.9389
6.0       672  -              0.9413
7.0       784  -              0.9437
8.0       896  -              0.9456
8.9286   1000  0.1307         0.9478

Framework Versions

  • Python: 3.12.5
  • Sentence Transformers: 3.4.1
  • Transformers: 4.49.0
  • PyTorch: 2.6.0
  • Accelerate: 1.5.2
  • Datasets: 3.4.1
  • Tokenizers: 0.21.1
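
To approximate this environment, the versions above can be pinned directly (exact wheels may vary by platform and CUDA version):

pip install sentence-transformers==3.4.1 transformers==4.49.0 torch==2.6.0 accelerate==1.5.2 datasets==3.4.1 tokenizers==0.21.1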

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}