---
base_model: BAAI/bge-large-en-v1.5
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
  - sentence-transformers
  - sentence-similarity
  - feature-extraction
  - generated_from_trainer
  - dataset_size:2442
  - loss:MultipleNegativesRankingLoss
widget:
  - source_sentence: |-
      Construct:  Identify the order of rotational symmetry of a shape.

      Question:  Which shape has the lowest order of rotational symmetry?

      Options:
      A. Equilateral triangle
      B. Circle
      C. Square
      D. Trapezium

      Correct Answer: Trapezium

      Incorrect Answer: Circle
    sentences:
      - >-
        When multiplying decimals, divides by the wrong power of 10 when
        reinserting the decimal
      - Does not understand inequality notation
      - Does not know how to find order of rotational symmetry
  - source_sentence: >-
      Construct:  Solve three or more step linear equations, with the variable
      on one side, involving negative integers.


      Question:  Tom and Katie are discussing how to solve:

      (5 x / 3)-1=-2


      Tom says a correct next line of working could be:  5 x-3=-6 


      Katie says a correct next line of working could be:  (5 x / 3)=-3 


      Who is correct?


      Options:

      A. Only Tom

      B. Only Katie

      C. Both Tom and Katie

      D. Neither is correct


      Correct Answer: Only Tom


      Incorrect Answer: Neither is correct
    sentences:
      - Mixes up the value of two terms when substituting
      - Thinks there are 100 ml in a litre
      - >-
        Does not understand that when multiplying both sides of an equation by
        an amount every term must be multiplied by the same amount
  - source_sentence: >-
      Construct:  Identify corresponding angles.


      Question:  M  and  N  are the intersections of the line  X Y  with the
      lines  P Q  and  R S .

      Which angle is corresponding to angle QMY? A pair of parallel lines
      pointing up to the left. PQ and RS are the ends of the parallel lines. PQ
      is on the left of the diagram with Q being the top left.

      A red straight line, XY, crosses the parallel lines. X is on the left of
      the diagram.

      Line XY crosses line PQ at a point marked M.

      Line XY crosses line RS at a point marked N.

      The angle QMY is marked in red.


      Options:

      A. XMP

      B. SNY

      C. SNX

      D. XNR


      Correct Answer: SNY


      Incorrect Answer: XNR
    sentences:
      - Confuses corresponding and alternate angles
      - Estimates shares of a ratio instead of calculating
      - Misremembers the quadratic formula
  - source_sentence: |-
      Construct:  Factorise a quadratic expression in the form x² - c.

      Question:  Factorise this expression, if possible:
      (
      p^2-4
      ).

      Options:
      A. (p-2)(p+2)
      B. p(p-2)
      C. (p-2)(p-2)
      D. Does not
      factorise

      Correct Answer: (p-2)(p+2)

      Incorrect Answer: p(p-2)
    sentences:
      - Mixes up greater than and less than symbols
      - Does not know how to find the length of a line segment from coordinates
      - Does not recognise difference of two squares
  - source_sentence: >-
      Construct:  Solve quadratic equations using the quadratic formula where
      the coefficient of x² is not 1.


      Question:  Vera wants to solve this equation using the quadratic formula.

      (

      3 h^2-10 h+4=0

      )


      What should replace the circle?  (? pm square root of (?-?) / bigcirc).


      Options:

      A. 3

      B. 5

      C. 9

      D. 6


      Correct Answer: 6


      Incorrect Answer: 3
    sentences:
      - Misremembers the quadratic formula
      - When asked for a specific term in a sequence gives the term after
      - Does not know that vertically opposite angles are equal

---

# SentenceTransformer based on BAAI/bge-large-en-v1.5

This is a [sentence-transformers](https://www.sbert.net) model finetuned from [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) on the csv dataset. It maps sentences and paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. For the Eedi 2024 misconception-retrieval task, this checkpoint reports a MAP@25 of 0.3104.

## Model Details

### Model Description

  • Model Type: Sentence Transformer
  • Base model: BAAI/bge-large-en-v1.5
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 1024 dimensions
  • Similarity Function: Cosine Similarity
  • Training Dataset:
    • csv

### Model Sources

  • Documentation: [Sentence Transformers Documentation](https://sbert.net)
  • Repository: [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
#### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```
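
The modules above amount to: encode with BERT, take the CLS token embedding (1024-d), then L2-normalize. As a rough sketch of what each module does, this is the equivalent pipeline in plain `transformers` (assuming the repo's transformer weights load via `AutoModel`, as is typical for sentence-transformers repos; prefer the `SentenceTransformer` usage below):

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Gurveer05/bge-large-eedi-2024")
model = AutoModel.from_pretrained("Gurveer05/bge-large-eedi-2024")

batch = tokenizer(
    ["Misremembers the quadratic formula"],
    padding=True, truncation=True, max_length=512,  # max_seq_length: 512
    return_tensors="pt",
)
with torch.no_grad():
    hidden = model(**batch).last_hidden_state   # (batch, seq_len, 1024)
cls = hidden[:, 0]                              # pooling_mode_cls_token: True
embeddings = torch.nn.functional.normalize(cls, p=2, dim=1)  # (2): Normalize()
```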

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.

```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("Gurveer05/bge-large-eedi-2024")
# Run inference
sentences = [
    'Construct:  Solve quadratic equations using the quadratic formula where the coefficient of x² is not 1.\n\nQuestion:  Vera wants to solve this equation using the quadratic formula.\n(\n3 h^2-10 h+4=0\n)\n\nWhat should replace the circle?  (? pm square root of (?-?) / bigcirc).\n\nOptions:\nA. 3\nB. 5\nC. 9\nD. 6\n\nCorrect Answer: 6\n\nIncorrect Answer: 3',
    'Misremembers the quadratic formula',
    'Does not know that vertically opposite angles are equal',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
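
Given the repository name, this model targets the Eedi 2024 misconception-retrieval task: embed a question/answer pair, then rank candidate misconceptions by cosine similarity (the top 25 feed the MAP@25 metric). A minimal retrieval sketch with an illustrative candidate list:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Gurveer05/bge-large-eedi-2024")

# Illustrative candidates; in practice, embed the full misconception bank.
misconceptions = [
    "Misremembers the quadratic formula",
    "Does not know that vertically opposite angles are equal",
    "Does not recognise difference of two squares",
]
query = (
    "Construct:  Factorise a quadratic expression in the form x² - c.\n\n"
    "Question:  Factorise this expression, if possible:\n(\np^2-4\n).\n\n"
    "Options:\nA. (p-2)(p+2)\nB. p(p-2)\nC. (p-2)(p-2)\nD. Does not\nfactorise\n\n"
    "Correct Answer: (p-2)(p+2)\n\nIncorrect Answer: p(p-2)"
)

q_emb = model.encode([query])
m_emb = model.encode(misconceptions)
scores = model.similarity(q_emb, m_emb)[0]        # cosine similarities
for idx in scores.argsort(descending=True).tolist():
    print(f"{scores[idx]:.3f}  {misconceptions[idx]}")
```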

## Training Details

### Training Dataset

#### csv

  • Dataset: csv
  • Size: 2,442 training samples
  • Columns: qa_pair_text and MisconceptionName
  • Approximate statistics based on the first 1000 samples:

    |         | qa_pair_text | MisconceptionName |
    |:--------|:-------------|:------------------|
    | type    | string       | string            |
    | details | min: 40 tokens, mean: 102.66 tokens, max: 512 tokens | min: 4 tokens, mean: 15.26 tokens, max: 39 tokens |
  • Samples:

    | qa_pair_text | MisconceptionName |
    |:-------------|:------------------|
    | Construct: Convert between cm³ and mm³.<br><br>Question: 1 cm^3 is the same as _______ mm^3.<br><br>Options:<br>A. 10<br>B. 100<br>C. 1000<br>D. 10000<br><br>Correct Answer: 1000<br><br>Incorrect Answer: 10 | Does not cube the conversion factor when converting cubed units |
    | Construct: Write algebraic expressions with correct algebraic convention.<br><br>Question: Which answer shows the following calculation using the correct algebraic convention?<br>( y x x+b x 3 ).<br><br>Options:<br>A. y x+b 3<br>B. x y+3 b<br>C. y+3 b x<br>D. 3 b x y<br><br>Correct Answer: x y+3 b<br><br>Incorrect Answer: 3 b x y | Multiplies all terms together when simplifying an expression |
    | Construct: Write algebraic expressions with correct algebraic convention.<br><br>Question: Which of the following is the correct way of writing: p divided by q, then add 3 using algebraic convention?<br><br>Options:<br>A. p q+3<br>B. (p / q)+3<br>C. (p / q+3)<br>D. p-q+3<br><br>Correct Answer: (p / q)+3<br><br>Incorrect Answer: p-q+3 | Has used a subtraction sign to represent division |
  • Loss: MultipleNegativesRankingLoss with these parameters:

    ```json
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    ```
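
With MultipleNegativesRankingLoss, each `qa_pair_text` is trained to rank its own `MisconceptionName` above the other misconceptions in the batch (in-batch negatives); `scale=20.0` is the temperature applied to the cosine similarities before the cross-entropy step. A minimal construction sketch with these parameters:

```python
from sentence_transformers import SentenceTransformer, losses, util

model = SentenceTransformer("BAAI/bge-large-en-v1.5")
# Matches the parameters above; every other in-batch pair acts as a negative.
loss = losses.MultipleNegativesRankingLoss(
    model, scale=20.0, similarity_fct=util.cos_sim
)
```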
    

### Evaluation Dataset

#### csv

  • Dataset: csv
  • Size: 1,928 evaluation samples
  • Columns: qa_pair_text and MisconceptionName
  • Approximate statistics based on the first 1000 samples:

    |         | qa_pair_text | MisconceptionName |
    |:--------|:-------------|:------------------|
    | type    | string       | string            |
    | details | min: 40 tokens, mean: 103.34 tokens, max: 512 tokens | min: 4 tokens, mean: 14.34 tokens, max: 40 tokens |
  • Samples:

    | qa_pair_text | MisconceptionName |
    |:-------------|:------------------|
    | Construct: Multiply two decimals together with the same number of decimal places.<br><br>Question: 0.4^2=.<br><br>Options:<br>A. 0.08<br>B. 0.8<br>C. 1.6<br>D. 0.16<br><br>Correct Answer: 0.16<br><br>Incorrect Answer: 0.8 | Mixes up squaring and multiplying by 2 or doubling |
    | Construct: Calculate the cube root of a number.<br><br>Question: 3rd root of (8)=.<br><br>Options:<br>A. 2 . dot{6}<br>B. 4<br>C. 64<br>D. 2<br><br>Correct Answer: 2<br><br>Incorrect Answer: 4 | Halves when asked to find the cube root |
    | Construct: Calculate missing lengths of shapes by geometrical inference, where the lengths given are in the same units.<br><br>Question: What is the area of the shaded section of this composite shape made from rectangles? A composite shape made from two rectangles that form an "L" shape. The base of the shape is horizontal and is 13cm long. The vertical height of the whole shape is 14cm. The horizontal width of the top part of the shape is 6cm. The vertical height of the top rectangle is 8cm. The right handed rectangle is shaded blue.<br><br>Options:<br>A. 48 cm^2<br>B. 104 cm^2<br>C. 42 cm^2<br>D. 56 cm^2<br><br>Correct Answer: 42 cm^2<br><br>Incorrect Answer: 48 cm^2 | Uses an incorrect side length when splitting a composite shape into parts |
  • Loss: MultipleNegativesRankingLoss with these parameters:

    ```json
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    ```
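
The reported MAP@25 of 0.3104 scores how highly the single gold misconception is ranked among the top 25 retrieved candidates. A minimal sketch of the metric under that single-gold assumption (hypothetical helper, not part of this repo):

```python
def map_at_25(ranked_ids: list[list[int]], gold_ids: list[int]) -> float:
    """Mean average precision at 25 with one gold item per query:
    each query contributes 1/rank if its gold id is in the top 25, else 0."""
    total = 0.0
    for preds, gold in zip(ranked_ids, gold_ids):
        top = preds[:25]
        if gold in top:
            total += 1.0 / (top.index(gold) + 1)
    return total / len(gold_ids)

# Example: gold ranked 2nd for query 0, absent for query 1 -> (0.5 + 0) / 2
print(map_at_25([[7, 3, 9], [1, 2, 4]], [3, 8]))  # 0.25
```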
    

## Training Hyperparameters

#### Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • gradient_accumulation_steps: 32
  • weight_decay: 0.01
  • num_train_epochs: 20
  • lr_scheduler_type: cosine_with_restarts
  • warmup_ratio: 0.1
  • fp16: True
  • load_best_model_at_end: True
  • gradient_checkpointing: True
  • batch_sampler: no_duplicates
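
With `per_device_train_batch_size=16` and `gradient_accumulation_steps=32`, the effective batch size is 512; the `no_duplicates` batch sampler keeps repeated misconception texts out of the same batch, where they would act as false negatives for MultipleNegativesRankingLoss. A sketch of these non-default values in the sentence-transformers v3 training API (output directory is a placeholder; datasets and trainer wiring omitted):

```python
from sentence_transformers.training_args import (
    BatchSamplers,
    SentenceTransformerTrainingArguments,
)

args = SentenceTransformerTrainingArguments(
    output_dir="bge-large-eedi-2024",          # placeholder
    eval_strategy="steps",
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=32,            # effective batch: 16 * 32 = 512
    weight_decay=0.01,
    num_train_epochs=20,
    lr_scheduler_type="cosine_with_restarts",
    warmup_ratio=0.1,
    fp16=True,
    load_best_model_at_end=True,
    gradient_checkpointing=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)
```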

#### All Hyperparameters

<details><summary>Click to expand</summary>
  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 32
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.01
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 20
  • max_steps: -1
  • lr_scheduler_type: cosine_with_restarts
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: True
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • eval_use_gather_object: False
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

</details>

### Training Logs

| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.4183 | 2  | 1.2854 | -      |
| 0.6275 | 3  | -      | 1.0368 |
| 0.8366 | 4  | 1.0855 | -      |
| 1.2549 | 6  | 0.7559 | 0.8548 |
| 1.6732 | 8  | 0.7032 | -      |
| 1.8824 | 9  | -      | 0.6840 |
| 2.0915 | 10 | 0.474  | -      |
| 2.5098 | 12 | 0.3959 | 0.6023 |
| 2.9281 | 14 | 0.3279 | -      |
| 3.1373 | 15 | -      | 0.5576 |
| 3.3464 | 16 | 0.2164 | -      |
| **3.7647** | **18** | **0.1991** | **0.4972** |
| 4.1830 | 20 | 0.1378 | -      |
| 4.3922 | 21 | -      | 0.5081 |
| 4.6013 | 22 | 0.1168 | -      |
| 5.0196 | 24 | 0.0955 | 0.5000 |

  • The bold row denotes the saved checkpoint (the step with the lowest validation loss, per load_best_model_at_end).

## Framework Versions

  • Python: 3.10.12
  • Sentence Transformers: 3.1.1
  • Transformers: 4.44.2
  • PyTorch: 2.4.1+cu121
  • Accelerate: 0.34.2
  • Datasets: 2.19.2
  • Tokenizers: 0.19.1
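
To mirror this training environment, the listed versions can be pinned directly (the PyTorch index URL assumes a CUDA 12.1 build, matching 2.4.1+cu121):

```bash
pip install sentence-transformers==3.1.1 transformers==4.44.2 \
    accelerate==0.34.2 datasets==2.19.2 tokenizers==0.19.1
pip install torch==2.4.1 --index-url https://download.pytorch.org/whl/cu121
```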

## Citation

### BibTeX

#### Sentence Transformers

```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

#### MultipleNegativesRankingLoss

```bibtex
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```