---
base_model: BAAI/bge-large-en-v1.5
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:2940
- loss:MultipleNegativesSymmetricRankingLoss
widget:
- source_sentence: >-
    Enlarge a shape, with a centre of enlargement given, by a positive scale
    factor bigger than 1, where the centre of enlargement lies on the edge or
    outside of the object The triangle is enlarged by scale factor 3, with the
    centre of enlargement at (1,0). What are the new coordinates of the point
    marked T ? ![A coordinate grid with the x-axis going from -1 to 10 and the
    y-axis going from -1 to 7. 3 points are plotted and joined with straight
    lines to form a triangle. The points are (1,1), (1,4) and (3,1). Point
    (3,1) is labelled as T. Point (1,0) is also plotted.]() (9,3)
  sentences:
  - Confuses powers and multiples
  - Enlarges by the wrong centre of enlargement
  - >-
    When asked for factors of an algebraic expression, thinks any part of a
    term will be a factor
- source_sentence: >-
    Identify a right-angled triangle from a description of the properties A
    triangle has the following angles: 90^, 45^, 45^ Statement 1. It must be a
    right angled triangle Statement 2. It must be an isosceles triangle Which
    is true? Statement 1
  sentences:
  - >-
    When solving a problem using written division (bus-stop method), does
    the calculation from right to left
  - >-
    Thinks finding a fraction of an amount means subtracting from that
    amount
  - Believes isosceles triangles cannot have right angles
- source_sentence: Convert from kilometers to miles 1 km≈ 0.6 miles 4 km≈□ miles 0.24
  sentences:
  - Believes multiplying two negatives gives a negative answer
  - Believes two lines of the same length are parallel
  - >-
    When multiplying decimals, ignores place value and just multiplies the
    digits
- source_sentence: >-
    Identify the order of rotational symmetry of a shape Which shape has
    rotational symmetry order 4 ? ![Trapezium]()
  sentences:
  - >-
    Believes the whole and remainder are the other way when changing an
    improper fraction to a mixed number
  - Does not know how to find order of rotational symmetry
  - Fails to reflect across mirror line
- source_sentence: >-
    Identify whether two shapes are similar or not Tom and Katie are
    discussing similarity. Who is correct? Tom says these two rectangles are
    similar ![Two rectangles of different sizes. One rectangle has width 2cm
    and height 3cm. The other rectangle has width 4cm and height 9cm. ]()
    Katie says these two rectangles are similar ![Two rectangles of different
    sizes. One rectangle has width 4cm and height 6cm. The other rectangle has
    width 7cm and height 9cm. ]() Only Katie
  sentences:
  - >-
    Does not recognise when one part of a fraction is the negative of the
    other
  - >-
    When solving simultaneous equations, thinks they can't multiply each
    equation by a different number
  - Thinks adding the same value to each side makes shapes similar
---
# SentenceTransformer based on BAAI/bge-large-en-v1.5
This is a sentence-transformers model finetuned from BAAI/bge-large-en-v1.5 on the csv dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details

### Model Description
- Model Type: Sentence Transformer
- Base model: BAAI/bge-large-en-v1.5
- Maximum Sequence Length: 512 tokens
- Output Dimensionality: 1024 dimensions
- Similarity Function: Cosine Similarity
- Training Dataset:
- csv
### Model Sources

- Documentation: [Sentence Transformers Documentation](https://sbert.net)
- Repository: [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- Hugging Face: [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```
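Because the pooling layer keeps only the `[CLS]` token and the final `Normalize()` module rescales each vector to unit L2 norm, cosine similarity between embeddings reduces to a plain dot product. A minimal NumPy sketch of what the pooling and normalization stages compute, with random data standing in for real BertModel token outputs:

```python
import numpy as np

# Stand-in for BertModel output: (batch, seq_len, hidden) token embeddings
token_embeddings = np.random.default_rng(0).normal(size=(3, 16, 1024))

# Pooling with pooling_mode_cls_token=True keeps only the first ([CLS]) token
cls_embeddings = token_embeddings[:, 0, :]                      # shape (3, 1024)

# Normalize() rescales each embedding to unit L2 norm
norms = np.linalg.norm(cls_embeddings, axis=1, keepdims=True)
embeddings = cls_embeddings / norms

# On unit vectors, cosine similarity is a plain matrix product
cosine = embeddings @ embeddings.T                              # shape (3, 3)
print(np.allclose(np.diag(cosine), 1.0))
# True
```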
## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("Gurveer05/bge-large-eedi-2024")
# Run inference
sentences = [
    'Identify whether two shapes are similar or not Tom and Katie are discussing similarity. Who is correct? Tom says these two rectangles are similar ![Two rectangles of different sizes. One rectangle has width 2cm and height 3cm. The other rectangle has width 4cm and height 9cm. ]() Katie says these two rectangles are similar ![Two rectangles of different sizes. One rectangle has width 4cm and height 6cm. The other rectangle has width 7cm and height 9cm. ]() Only Katie',
    'Thinks adding the same value to each side makes shapes similar',
    "When solving simultaneous equations, thinks they can't multiply each equation by a different number",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
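A typical use of these embeddings is ranking: encode a diagnostic question and a set of candidate sentences (such as the misconception descriptions seen in the training data), then sort the candidates by cosine similarity. A sketch of that retrieval step, where the hypothetical `fake_encode` helper produces random unit vectors in place of real `model.encode` outputs:

```python
import numpy as np

rng = np.random.default_rng(42)

def fake_encode(n_sentences, dim=1024):
    """Hypothetical stand-in for model.encode: random unit-norm vectors."""
    v = rng.normal(size=(n_sentences, dim))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

question_emb = fake_encode(1)      # embedding of one diagnostic question
candidate_embs = fake_encode(5)    # embeddings of five candidate descriptions

# Cosine similarity is a dot product on unit-norm vectors
scores = (question_emb @ candidate_embs.T).ravel()

# Candidate indices sorted best-first
ranking = np.argsort(-scores)
print(ranking.shape)
# (5,)
```

In practice the candidate set would be encoded once and cached, and only the query embedding computed per request.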
## Training Details

### Training Dataset

#### csv

- Dataset: csv
- Size: 2,940 training samples
- Columns: `sentence1` and `sentence2`
- Approximate statistics based on the first 1000 samples:

  |         | sentence1                                           | sentence2                                         |
  |---------|-----------------------------------------------------|---------------------------------------------------|
  | type    | string                                              | string                                            |
  | details | min: 13 tokens, mean: 56.03 tokens, max: 249 tokens | min: 6 tokens, mean: 15.19 tokens, max: 39 tokens |

- Loss: `MultipleNegativesSymmetricRankingLoss` with these parameters:
  ```json
  {
      "scale": 20.0,
      "similarity_fct": "cos_sim"
  }
  ```
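`MultipleNegativesSymmetricRankingLoss` treats the other pairs in the batch as negatives and applies the ranking loss in both directions (sentence1 → sentence2 and sentence2 → sentence1), averaging the two. A simplified NumPy sketch of that computation, with random unit vectors in place of real embeddings and the `scale: 20.0` from the parameters above:

```python
import numpy as np

def log_softmax(x, axis):
    x = x - x.max(axis=axis, keepdims=True)
    return x - np.log(np.exp(x).sum(axis=axis, keepdims=True))

rng = np.random.default_rng(0)

# Random unit-norm stand-ins for the embeddings of 16 (sentence1, sentence2) pairs
a = rng.normal(size=(16, 1024)); a /= np.linalg.norm(a, axis=1, keepdims=True)
b = rng.normal(size=(16, 1024)); b /= np.linalg.norm(b, axis=1, keepdims=True)

# Scaled cosine similarities; entry (i, j) scores sentence1_i against sentence2_j
scores = 20.0 * (a @ b.T)

# Cross-entropy in both directions: each a_i should rank b_i highest, and vice
# versa; all other in-batch pairs act as negatives
diag = np.arange(16)
forward = -log_softmax(scores, axis=1)[diag, diag].mean()
backward = -log_softmax(scores, axis=0)[diag, diag].mean()
loss = (forward + backward) / 2
print(loss > 0)
# True
```

This in-batch-negatives setup is also why the `no_duplicates` batch sampler below matters: a duplicate pair in the batch would act as a false negative.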
### Training Hyperparameters

#### Non-Default Hyperparameters

- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `num_train_epochs`: 20
- `fp16`: True
- `load_best_model_at_end`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters

<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 20
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`: 
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `eval_use_gather_object`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional

</details>
### Training Logs

| Epoch | Step | Training Loss |
|---|---|---|
| 0.25 | 23 | 1.0714 |
| 0.5 | 46 | 0.9487 |
| 0.75 | 69 | 0.8001 |
| 1.0 | 92 | 0.7443 |
| 1.25 | 115 | 0.3951 |
| 1.5 | 138 | 0.3903 |
| 1.75 | 161 | 0.3867 |
| 2.0 | 184 | 0.3386 |
| 2.25 | 207 | 0.2206 |
| 2.5 | 230 | 0.2051 |
| 2.75 | 253 | 0.2098 |
| 3.0 | 276 | 0.1989 |
| 3.25 | 299 | 0.1486 |
| 3.5 | 322 | 0.1463 |
| 3.75 | 345 | 0.1453 |
| 4.0 | 368 | 0.1237 |
| 4.25 | 391 | 0.0956 |
| 4.5 | 414 | 0.0939 |
| 4.75 | 437 | 0.1115 |
| 5.0 | 460 | 0.0925 |
| 5.25 | 483 | 0.0778 |
| 5.5 | 506 | 0.0744 |
| 5.75 | 529 | 0.09 |
| 6.0 | 552 | 0.0782 |
| 6.25 | 575 | 0.0454 |
| 6.5 | 598 | 0.0697 |
| 6.75 | 621 | 0.059 |
| 7.0 | 644 | 0.033 |
| 7.25 | 667 | 0.0309 |
| 7.5 | 690 | 0.0548 |
| 7.75 | 713 | 0.0605 |
| 8.0 | 736 | 0.0431 |
| 8.25 | 759 | 0.0224 |
| 8.5 | 782 | 0.0381 |
| 8.75 | 805 | 0.0451 |
| 9.0 | 828 | 0.0169 |
| 9.25 | 851 | 0.0228 |
| 9.5 | 874 | 0.0257 |
- The bold row denotes the saved checkpoint.
## Framework Versions
- Python: 3.10.14
- Sentence Transformers: 3.1.0
- Transformers: 4.44.0
- PyTorch: 2.4.0
- Accelerate: 0.33.0
- Datasets: 2.19.2
- Tokenizers: 0.19.1
## Citation

### BibTeX

#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```