---
base_model: BAAI/bge-large-en-v1.5
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
  - sentence-transformers
  - sentence-similarity
  - feature-extraction
  - generated_from_trainer
  - dataset_size:2940
  - loss:MultipleNegativesSymmetricRankingLoss
widget:
  - source_sentence: >-
      Enlarge a shape, with a centre of enlargement given, by a positive scale
      factor bigger than 1, where the centre of enlargement lies on the edge or
      outside of the object The triangle is enlarged by scale factor 3, with the
      centre of enlargement at (1,0). What are the new coordinates of the point
      marked T ? ![A coordinate grid with the x-axis going from -1 to 10 and the
      y-axis going from -1 to 7. 3 points are plotted and joined with straight
      lines to form a triangle. The points are (1,1), (1,4) and (3,1). Point
      (3,1) is labelled as T. Point (1,0) is also plotted.]() (9,3)
    sentences:
      - Confuses powers and multiples
      - Enlarges by the wrong centre of enlargement
      - >-
        When asked for factors of an algebraic expression, thinks any part of a
        term will be a factor
  - source_sentence: >-
      Identify a right-angled triangle from a description of the properties A
      triangle has the following angles: 90^, 45^, 45^ Statement 1. It must be a
      right angled triangle Statement 2. It must be an isosceles triangle Which
      is true? Statement 1
    sentences:
      - >-
        When solving a problem using written division (bus-stop method), does
        the calculation from right to left
      - >-
        Thinks finding a fraction of an amount means subtracting from that
        amount
      - Believes isosceles triangles cannot have right angles
  - source_sentence: Convert from kilometers to miles 1 km≈ 0.6 miles 4 km≈□ miles 0.24
    sentences:
      - Believes multiplying two negatives gives a negative answer
      - Believes two lines of the same length are parallel
      - >-
        When multiplying decimals, ignores place value and just multiplies the
        digits
  - source_sentence: >-
      Identify the order of rotational symmetry of a shape Which shape has
      rotational symmetry order 4 ? ![Trapezium]()
    sentences:
      - >-
        Believes the whole and remainder are the other way when changing an
        improper fraction to a mixed number
      - Does not know how to find order of rotational symmetry
      - Fails to reflect across mirror line
  - source_sentence: >-
      Identify whether two shapes are similar or not Tom and Katie are
      discussing similarity. Who is correct? Tom says these two rectangles are
      similar ![Two rectangles of different sizes. One rectangle has width 2cm
      and height 3cm. The other rectangle has width 4cm and height 9cm. ]()
      Katie says these two rectangles are similar ![Two rectangles of different
      sizes. One rectangle has width 4cm and height 6cm. The other rectangle has
      width 7cm and height 9cm. ]() Only Katie
    sentences:
      - >-
        Does not recognise when one part of a fraction is the negative of the
        other
      - >-
        When solving simultaneous equations, thinks they can't multiply each
        equation by a different number
      - Thinks adding the same value to each side makes shapes similar
---

SentenceTransformer based on BAAI/bge-large-en-v1.5

This is a sentence-transformers model finetuned from BAAI/bge-large-en-v1.5 on the csv dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: BAAI/bge-large-en-v1.5
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 1024 dimensions
  • Similarity Function: Cosine Similarity
  • Training Dataset:
    • csv

Model Sources

  • Documentation: Sentence Transformers Documentation (https://sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
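
For illustration, the three modules above can be reproduced with plain transformers: run the BERT encoder, take the [CLS] token (pooling_mode_cls_token is enabled), then L2-normalize. This is a minimal sketch, assuming the transformer weights sit at the repo root as is standard for Sentence Transformers models; loading via SentenceTransformer (see Usage below) remains the supported path.

import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Gurveer05/bge-large-eedi-2024")
bert = AutoModel.from_pretrained("Gurveer05/bge-large-eedi-2024")

batch = tokenizer(
    ["Believes isosceles triangles cannot have right angles"],
    padding=True, truncation=True, max_length=512, return_tensors="pt",
)
with torch.no_grad():
    token_embeddings = bert(**batch).last_hidden_state  # (batch, seq_len, 1024)
cls = token_embeddings[:, 0]         # (1) Pooling: keep only the [CLS] token
embedding = F.normalize(cls, dim=1)  # (2) Normalize: unit length, so dot product = cosine
print(embedding.shape)
# torch.Size([1, 1024])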

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("Gurveer05/bge-large-eedi-2024")
# Run inference
sentences = [
    'Identify whether two shapes are similar or not Tom and Katie are discussing similarity. Who is correct? Tom says these two rectangles are similar ![Two rectangles of different sizes. One rectangle has width 2cm and height 3cm. The other rectangle has width 4cm and height 9cm. ]() Katie says these two rectangles are similar ![Two rectangles of different sizes. One rectangle has width 4cm and height 6cm. The other rectangle has width 7cm and height 9cm. ]() Only Katie',
    'Thinks adding the same value to each side makes shapes similar',
    "When solving simultaneous equations, thinks they can't multiply each equation by a different number",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
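
Because the model was trained to match questions to misconceptions (the Eedi 2024 setting the examples in this card come from), a natural usage pattern is ranking candidate misconceptions for a question. Here is a small sketch reusing strings from this card; the candidate list is illustrative only:

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Gurveer05/bge-large-eedi-2024")

query = (
    "Identify the order of rotational symmetry of a shape "
    "Which shape has rotational symmetry order 4? Trapezium"
)
candidates = [
    "Does not know how to find order of rotational symmetry",
    "Fails to reflect across mirror line",
    "Confuses powers and multiples",
]
# Embed once, then score the query against every candidate (cosine similarity)
scores = model.similarity(model.encode([query]), model.encode(candidates))[0]
for score, text in sorted(zip(scores.tolist(), candidates), reverse=True):
    print(f"{score:.3f}  {text}")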

Training Details

Training Dataset

csv

  • Dataset: csv
  • Size: 2,940 training samples
  • Columns: sentence1 and sentence2
  • Approximate statistics based on the first 1000 samples:
    • sentence1: string; min 13 tokens, mean 56.03 tokens, max 249 tokens
    • sentence2: string; min 6 tokens, mean 15.19 tokens, max 39 tokens
  • Samples:
    • sentence1: Read a fraction on a scale where the required number is marked by a dash between two numbers What fraction is the arrow pointing to? An image of a numberline with 5 dashes. On the leftmost dash is the number 1/6. On the rightmost dash is the number 3/6. An arrow points to the 4th dash from the left 3/4
      sentence2: When reading a dash on a number line does not take into account the number at the start or the width of each division
    • sentence1: Substitute positive non-integer values into expressions involving powers or roots Jo and Paul are discussing quadratic equations. Jo says there is no value of x that can make (1-x)^2 negative. Paul says there is no value of x that can make 1-x^2 positive. Who is correct? Both Jo and Paul
      sentence2: Assumes a fact without considering enough examples
    • sentence1: Recognise and use efficient methods for mental multiplication Tom and Katie are discussing mental multiplication strategies. Tom says 15 × 42=154 × 2 Katie says 15 × 42=(15 × 4)+(15 × 2) Who is correct? Only Tom
      sentence2: Does not correctly apply the commutative property of multiplication
  • Loss: MultipleNegativesSymmetricRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
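
With this loss, each (sentence1, sentence2) pair is treated as a positive and every other example in the batch serves as an in-batch negative, scored symmetrically in both directions. A minimal sketch of how the loss above is constructed in sentence-transformers:

from sentence_transformers import SentenceTransformer, losses, util

model = SentenceTransformer("BAAI/bge-large-en-v1.5")
train_loss = losses.MultipleNegativesSymmetricRankingLoss(
    model,
    scale=20.0,                   # multiplier on the similarity scores
    similarity_fct=util.cos_sim,  # cosine similarity, matching "cos_sim" above
)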
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • num_train_epochs: 20
  • fp16: True
  • load_best_model_at_end: True
  • batch_sampler: no_duplicates
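
Expressed as code, these non-default values map onto SentenceTransformerTrainingArguments (sentence-transformers >= 3.0). A sketch, with a hypothetical output directory:

from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="bge-large-eedi-2024",  # hypothetical output path
    eval_strategy="steps",
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=20,
    fp16=True,
    load_best_model_at_end=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # no repeated texts within a batch
)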

All Hyperparameters

Click to expand
  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 20
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • eval_use_gather_object: False
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch Step Training Loss
0.25 23 1.0714
0.5 46 0.9487
0.75 69 0.8001
1.0 92 0.7443
1.25 115 0.3951
1.5 138 0.3903
1.75 161 0.3867
2.0 184 0.3386
2.25 207 0.2206
2.5 230 0.2051
2.75 253 0.2098
3.0 276 0.1989
3.25 299 0.1486
3.5 322 0.1463
3.75 345 0.1453
4.0 368 0.1237
4.25 391 0.0956
4.5 414 0.0939
4.75 437 0.1115
5.0 460 0.0925
5.25 483 0.0778
5.5 506 0.0744
5.75 529 0.09
6.0 552 0.0782
6.25 575 0.0454
6.5 598 0.0697
6.75 621 0.059
7.0 644 0.033
7.25 667 0.0309
7.5 690 0.0548
7.75 713 0.0605
8.0 736 0.0431
8.25 759 0.0224
8.5 782 0.0381
8.75 805 0.0451
9.0 828 0.0169
9.25 851 0.0228
9.5 874 0.0257

Framework Versions

  • Python: 3.10.14
  • Sentence Transformers: 3.1.0
  • Transformers: 4.44.0
  • PyTorch: 2.4.0
  • Accelerate: 0.33.0
  • Datasets: 2.19.2
  • Tokenizers: 0.19.1
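
For reproducibility, the versions above can be pinned at install time (a hedged example; adjust the torch pin to your platform and CUDA build):

pip install "sentence-transformers==3.1.0" "transformers==4.44.0" "torch==2.4.0" "accelerate==0.33.0" "datasets==2.19.2" "tokenizers==0.19.1"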

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}