SentenceTransformer based on BAAI/bge-large-en-v1.5

This is a sentence-transformers model finetuned from BAAI/bge-large-en-v1.5 on the csv dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: BAAI/bge-large-en-v1.5
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 1024 dimensions
  • Similarity Function: Cosine Similarity
  • Training Dataset:
    • csv

Model Sources

  • Documentation: Sentence Transformers Documentation (https://www.sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
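
For reference, this three-module stack (a BERT encoder with CLS-token pooling followed by L2 normalization) can be reproduced by hand with the models submodule. The snippet below is a minimal sketch of an equivalent architecture, not the exact code used to create this checkpoint:

from sentence_transformers import SentenceTransformer, models

# Transformer backbone: BAAI/bge-large-en-v1.5, truncated at 512 tokens, lower-cased input
word_embedding = models.Transformer("BAAI/bge-large-en-v1.5", max_seq_length=512, do_lower_case=True)

# CLS-token pooling over the 1024-dimensional token embeddings
pooling = models.Pooling(
    word_embedding.get_word_embedding_dimension(),
    pooling_mode_cls_token=True,
    pooling_mode_mean_tokens=False,
)

# L2-normalize the sentence embeddings so cosine similarity equals the dot product
model = SentenceTransformer(modules=[word_embedding, pooling, models.Normalize()])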

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("Gurveer05/bge-large-eedi-2024")
# Run inference
sentences = [
    'Construct:  Solve quadratic equations using the quadratic formula where the coefficient of x² is not 1.\n\nQuestion:  Vera wants to solve this equation using the quadratic formula.\n(\n3 h^2-10 h+4=0\n)\n\nWhat should replace the circle?  (? pm square root of (?-?) / bigcirc).\n\nOptions:\nA. 3\nB. 5\nC. 9\nD. 6\n\nCorrect Answer: 6\n\nIncorrect Answer: 3',
    'Misremembers the quadratic formula',
    'Does not know that vertically opposite angles are equal',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
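
Because the model was trained to align QA-pair texts with misconception descriptions, a common follow-up is to rank candidate misconceptions for a question. Below is a hedged sketch reusing the sentences list above, where the two misconception strings stand in for a full candidate bank:

import torch

# sentences[0] is the QA-pair text; the rest act as candidate MisconceptionName strings
query_embedding = model.encode([sentences[0]])
candidate_embeddings = model.encode(sentences[1:])

# Cosine similarities of the query against each candidate, shape [1, 2]
scores = model.similarity(query_embedding, candidate_embeddings)
best = int(torch.argmax(scores))
print(sentences[1:][best])
# expected: 'Misremembers the quadratic formula'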

Training Details

Training Dataset

csv

  • Dataset: csv
  • Size: 2,442 training samples
  • Columns: qa_pair_text and MisconceptionName
  • Approximate statistics based on the first 1000 samples:
    • qa_pair_text: string; min 40 tokens, mean 102.66 tokens, max 512 tokens
    • MisconceptionName: string; min 4 tokens, mean 15.26 tokens, max 39 tokens
  • Samples:
    • qa_pair_text:
      Construct: Convert between cm³ and mm³.
      Question: 1 cm^3 is the same as _______ mm^3.
      Options: A. 10  B. 100  C. 1000  D. 10000
      Correct Answer: 1000
      Incorrect Answer: 10
      MisconceptionName: Does not cube the conversion factor when converting cubed units
    • qa_pair_text:
      Construct: Write algebraic expressions with correct algebraic convention.
      Question: Which answer shows the following calculation using the correct algebraic convention? ( y × x + b × 3 )
      Options: A. y x+b 3  B. x y+3 b  C. y+3 b x  D. 3 b x y
      Correct Answer: x y+3 b
      Incorrect Answer: 3 b x y
      MisconceptionName: Multiplies all terms together when simplifying an expression
    • qa_pair_text:
      Construct: Write algebraic expressions with correct algebraic convention.
      Question: Which of the following is the correct way of writing: p divided by q, then add 3, using algebraic convention?
      Options: A. p q+3  B. (p / q)+3  C. (p / q+3)  D. p-q+3
      Correct Answer: (p / q)+3
      Incorrect Answer: p-q+3
      MisconceptionName: Has used a subtraction sign to represent division
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
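
As an illustration, a CSV with these two columns could be loaded and combined with this loss as sketched below; the file names are placeholders, since the actual training files are not part of this repository:

from datasets import load_dataset
from sentence_transformers import SentenceTransformer, losses, util

# Placeholder file names: CSVs with "qa_pair_text" and "MisconceptionName" columns
train_dataset = load_dataset("csv", data_files="train.csv", split="train")
eval_dataset = load_dataset("csv", data_files="eval.csv", split="train")

model = SentenceTransformer("BAAI/bge-large-en-v1.5")

# In-batch negatives: each qa_pair_text is pulled towards its own MisconceptionName
# and pushed away from the other misconceptions in the same batch
loss = losses.MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=util.cos_sim)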
    

Evaluation Dataset

csv

  • Dataset: csv
  • Size: 1,928 evaluation samples
  • Columns: qa_pair_text and MisconceptionName
  • Approximate statistics based on the first 1000 samples:
    • qa_pair_text: string; min 40 tokens, mean 103.34 tokens, max 512 tokens
    • MisconceptionName: string; min 4 tokens, mean 14.34 tokens, max 40 tokens
  • Samples:
    • qa_pair_text:
      Construct: Multiply two decimals together with the same number of decimal places.
      Question: 0.4^2 =
      Options: A. 0.08  B. 0.8  C. 1.6  D. 0.16
      Correct Answer: 0.16
      Incorrect Answer: 0.8
      MisconceptionName: Mixes up squaring and multiplying by 2 or doubling
    • qa_pair_text:
      Construct: Calculate the cube root of a number.
      Question: ∛8 =
      Options: A. 2.6 recurring  B. 4  C. 64  D. 2
      Correct Answer: 2
      Incorrect Answer: 4
      MisconceptionName: Halves when asked to find the cube root
    • qa_pair_text:
      Construct: Calculate missing lengths of shapes by geometrical inference, where the lengths given are in the same units.
      Question: What is the area of the shaded section of this composite shape made from rectangles? A composite shape made from two rectangles that form an "L" shape. The base of the shape is horizontal and is 13 cm long. The vertical height of the whole shape is 14 cm. The horizontal width of the top part of the shape is 6 cm. The vertical height of the top rectangle is 8 cm. The right-hand rectangle is shaded blue.
      Options: A. 48 cm^2  B. 104 cm^2  C. 42 cm^2  D. 56 cm^2
      Correct Answer: 42 cm^2
      Incorrect Answer: 48 cm^2
      MisconceptionName: Uses an incorrect side length when splitting a composite shape into parts
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • gradient_accumulation_steps: 32
  • weight_decay: 0.01
  • num_train_epochs: 20
  • lr_scheduler_type: cosine_with_restarts
  • warmup_ratio: 0.1
  • fp16: True
  • load_best_model_at_end: True
  • gradient_checkpointing: True
  • batch_sampler: no_duplicates
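
For illustration, these non-default values map onto SentenceTransformerTrainingArguments roughly as follows (the output directory is a placeholder, not the path used for this model):

from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="bge-large-eedi-2024",  # placeholder
    eval_strategy="steps",
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=32,
    weight_decay=0.01,
    num_train_epochs=20,
    lr_scheduler_type="cosine_with_restarts",
    warmup_ratio=0.1,
    fp16=True,
    load_best_model_at_end=True,
    gradient_checkpointing=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)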

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 32
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.01
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 20
  • max_steps: -1
  • lr_scheduler_type: cosine_with_restarts
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: True
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • eval_use_gather_object: False
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional
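
Putting the pieces together, a training run with this configuration would look roughly like the sketch below, reusing the model, datasets, loss, and arguments from the earlier sketches; it is a reconstruction under those assumptions, not the exact script used:

from sentence_transformers import SentenceTransformerTrainer

trainer = SentenceTransformerTrainer(
    model=model,                  # SentenceTransformer initialised from BAAI/bge-large-en-v1.5
    args=args,                    # training arguments sketched above
    train_dataset=train_dataset,  # (qa_pair_text, MisconceptionName) pairs
    eval_dataset=eval_dataset,    # held-out pairs with the same two columns
    loss=loss,                    # MultipleNegativesRankingLoss
)
trainer.train()
model.save_pretrained("bge-large-eedi-2024/final")  # placeholder output path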

Training Logs

Epoch   Step  Training Loss  Validation Loss
0.4183  2     1.2854         -
0.6275  3     -              1.0368
0.8366  4     1.0855         -
1.2549  6     0.7559         0.8548
1.6732  8     0.7032         -
1.8824  9     -              0.6840
2.0915  10    0.474          -
2.5098  12    0.3959         0.6023
2.9281  14    0.3279         -
3.1373  15    -              0.5576
3.3464  16    0.2164         -
3.7647  18    0.1991         0.4972
4.1830  20    0.1378         -
4.3922  21    -              0.5081
4.6013  22    0.1168         -
5.0196  24    0.0955         0.5000
  • The saved checkpoint corresponds to the step with the best validation loss (load_best_model_at_end: True).

Framework Versions

  • Python: 3.10.12
  • Sentence Transformers: 3.1.1
  • Transformers: 4.44.2
  • PyTorch: 2.4.1+cu121
  • Accelerate: 0.34.2
  • Datasets: 2.19.2
  • Tokenizers: 0.19.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}