SentenceTransformer based on BAAI/bge-large-en-v1.5
This is a sentence-transformers model finetuned from BAAI/bge-large-en-v1.5 on the csv dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: BAAI/bge-large-en-v1.5
- Maximum Sequence Length: 512 tokens
- Output Dimensionality: 1024 dimensions
- Similarity Function: Cosine Similarity
- Training Dataset:
- csv
Model Sources
- Documentation: Sentence Transformers Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Sentence Transformers on Hugging Face
Full Model Architecture
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
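The three modules are a standard BERT encoder, CLS-token pooling, and L2 normalization. A minimal sketch of the same computation using plain transformers, assuming the repository loads as a standard BertModel checkpoint (bge models pool the CLS token rather than mean-pooling):

import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Gurveer05/bge-large-eedi-2024")
bert = AutoModel.from_pretrained("Gurveer05/bge-large-eedi-2024")

inputs = tokenizer(["An example sentence"], padding=True, truncation=True,
                   max_length=512, return_tensors="pt")
with torch.no_grad():
    hidden = bert(**inputs).last_hidden_state  # (batch, seq_len, 1024)
cls = hidden[:, 0]                             # pooling_mode_cls_token
embedding = torch.nn.functional.normalize(cls, p=2, dim=1)  # Normalize()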
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Gurveer05/bge-large-eedi-2024")
# Run inference
sentences = [
'Construct: Solve quadratic equations using the quadratic formula where the coefficient of x² is not 1.\n\nQuestion: Vera wants to solve this equation using the quadratic formula.\n(\n3 h^2-10 h+4=0\n)\n\nWhat should replace the circle? (? pm square root of (?-?) / bigcirc).\n\nOptions:\nA. 3\nB. 5\nC. 9\nD. 6\n\nCorrect Answer: 6\n\nIncorrect Answer: 3',
'Misremembers the quadratic formula',
'Does not know that vertically opposite angles are equal',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
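For the misconception-retrieval task this model was fine-tuned for, a typical pattern is to embed one QA-pair text and rank a pool of candidate misconception names by cosine similarity. A minimal sketch continuing from the snippet above (the candidate list is illustrative, not the full misconception bank):

# Hypothetical candidate pool; real usage would embed every misconception name once.
candidates = [
    "Misremembers the quadratic formula",
    "Does not know that vertically opposite angles are equal",
    "Halves when asked to find the cube root",
]
query_embedding = model.encode([sentences[0]])       # the QA-pair text
candidate_embeddings = model.encode(candidates)
scores = model.similarity(query_embedding, candidate_embeddings)[0]
for idx in scores.argsort(descending=True):          # best match first
    print(f"{scores[idx]:.4f}  {candidates[idx]}")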
Training Details
Training Dataset
csv
- Dataset: csv
- Size: 2,442 training samples
- Columns: qa_pair_text and MisconceptionName
- Approximate statistics based on the first 1000 samples:

  | | qa_pair_text | MisconceptionName |
  |---|---|---|
  | type | string | string |
  | details | min: 40 tokens, mean: 102.66 tokens, max: 512 tokens | min: 4 tokens, mean: 15.26 tokens, max: 39 tokens |
- Samples:

  | qa_pair_text | MisconceptionName |
  |---|---|
  | Construct: Convert between cm³ and mm³.<br>Question: 1 cm³ is the same as _______ mm³.<br>Options:<br>A. 10<br>B. 100<br>C. 1000<br>D. 10000<br>Correct Answer: 1000<br>Incorrect Answer: 10 | Does not cube the conversion factor when converting cubed units |
  | Construct: Write algebraic expressions with correct algebraic convention.<br>Question: Which answer shows the following calculation using the correct algebraic convention? y × x + b × 3<br>Options:<br>A. yx + b3<br>B. xy + 3b<br>C. y + 3bx<br>D. 3bxy<br>Correct Answer: xy + 3b<br>Incorrect Answer: 3bxy | Multiplies all terms together when simplifying an expression |
  | Construct: Write algebraic expressions with correct algebraic convention.<br>Question: Which of the following is the correct way of writing: p divided by q, then add 3 using algebraic convention?<br>Options:<br>A. pq + 3<br>B. (p / q) + 3<br>C. (p / q + 3)<br>D. p - q + 3<br>Correct Answer: (p / q) + 3<br>Incorrect Answer: p - q + 3 | Has used a subtraction sign to represent division |
- Loss: MultipleNegativesRankingLoss with these parameters:

  { "scale": 20.0, "similarity_fct": "cos_sim" }
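MultipleNegativesRankingLoss treats each (qa_pair_text, MisconceptionName) pair as a positive and uses the other misconceptions in the batch as negatives, which is why the no_duplicates batch sampler listed under Training Hyperparameters matters. A minimal training sketch under these parameters, with placeholder rows rather than the actual csv data:

from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import MultipleNegativesRankingLoss

model = SentenceTransformer("BAAI/bge-large-en-v1.5")
# Placeholder rows using the same column names as the csv dataset.
train_dataset = Dataset.from_dict({
    "qa_pair_text": ["Construct: ...\nQuestion: ...\nIncorrect Answer: ..."],
    "MisconceptionName": ["Misremembers the quadratic formula"],
})
# scale=20.0 with the default cosine similarity matches the parameters above.
loss = MultipleNegativesRankingLoss(model, scale=20.0)
trainer = SentenceTransformerTrainer(model=model, train_dataset=train_dataset, loss=loss)
trainer.train()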
Evaluation Dataset
csv
- Dataset: csv
- Size: 1,928 evaluation samples
- Columns: qa_pair_text and MisconceptionName
- Approximate statistics based on the first 1000 samples:

  | | qa_pair_text | MisconceptionName |
  |---|---|---|
  | type | string | string |
  | details | min: 40 tokens, mean: 103.34 tokens, max: 512 tokens | min: 4 tokens, mean: 14.34 tokens, max: 40 tokens |
- Samples:

  | qa_pair_text | MisconceptionName |
  |---|---|
  | Construct: Multiply two decimals together with the same number of decimal places.<br>Question: 0.4² =<br>Options:<br>A. 0.08<br>B. 0.8<br>C. 1.6<br>D. 0.16<br>Correct Answer: 0.16<br>Incorrect Answer: 0.8 | Mixes up squaring and multiplying by 2 or doubling |
  | Construct: Calculate the cube root of a number.<br>Question: ∛8 =<br>Options:<br>A. 2.6̇<br>B. 4<br>C. 64<br>D. 2<br>Correct Answer: 2<br>Incorrect Answer: 4 | Halves when asked to find the cube root |
  | Construct: Calculate missing lengths of shapes by geometrical inference, where the lengths given are in the same units.<br>Question: What is the area of the shaded section of this composite shape made from rectangles? A composite shape made from two rectangles that form an "L" shape. The base of the shape is horizontal and is 13 cm long. The vertical height of the whole shape is 14 cm. The horizontal width of the top part of the shape is 6 cm. The vertical height of the top rectangle is 8 cm. The right-hand rectangle is shaded blue.<br>Options:<br>A. 48 cm²<br>B. 104 cm²<br>C. 42 cm²<br>D. 56 cm²<br>Correct Answer: 42 cm²<br>Incorrect Answer: 48 cm² | Uses an incorrect side length when splitting a composite shape into parts |
- Loss: MultipleNegativesRankingLoss with these parameters:

  { "scale": 20.0, "similarity_fct": "cos_sim" }
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: steps
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- gradient_accumulation_steps: 32
- weight_decay: 0.01
- num_train_epochs: 20
- lr_scheduler_type: cosine_with_restarts
- warmup_ratio: 0.1
- fp16: True
- load_best_model_at_end: True
- gradient_checkpointing: True
- batch_sampler: no_duplicates
All Hyperparameters
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: steps
- prediction_loss_only: True
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 32
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 5e-05
- weight_decay: 0.01
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1.0
- num_train_epochs: 20
- max_steps: -1
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.1
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: True
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: True
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: False
- hub_always_push: False
- gradient_checkpointing: True
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- dispatch_batches: None
- split_batches: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- eval_use_gather_object: False
- batch_sampler: no_duplicates
- multi_dataset_batch_sampler: proportional
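The non-default values above map directly onto SentenceTransformerTrainingArguments; a minimal sketch, with output_dir as a placeholder:

from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="bge-large-eedi-2024",  # placeholder
    eval_strategy="steps",
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=32,
    weight_decay=0.01,
    num_train_epochs=20,
    lr_scheduler_type="cosine_with_restarts",
    warmup_ratio=0.1,
    fp16=True,
    load_best_model_at_end=True,
    gradient_checkpointing=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)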
Training Logs
Epoch | Step | Training Loss | Validation Loss |
---|---|---|---|
0.4183 | 2 | 1.2854 | - |
0.6275 | 3 | - | 1.0368 |
0.8366 | 4 | 1.0855 | - |
1.2549 | 6 | 0.7559 | 0.8548 |
1.6732 | 8 | 0.7032 | - |
1.8824 | 9 | - | 0.6840 |
2.0915 | 10 | 0.474 | - |
2.5098 | 12 | 0.3959 | 0.6023 |
2.9281 | 14 | 0.3279 | - |
3.1373 | 15 | - | 0.5576 |
3.3464 | 16 | 0.2164 | - |
**3.7647** | **18** | **0.1991** | **0.4972** |
4.1830 | 20 | 0.1378 | - |
4.3922 | 21 | - | 0.5081 |
4.6013 | 22 | 0.1168 | - |
5.0196 | 24 | 0.0955 | 0.5000 |
- The bold row denotes the saved checkpoint.
Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.1.1
- Transformers: 4.44.2
- PyTorch: 2.4.1+cu121
- Accelerate: 0.34.2
- Datasets: 2.19.2
- Tokenizers: 0.19.1
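To approximate this environment, the library versions above can be pinned at install time; the matching PyTorch 2.4.1 + CUDA 12.1 wheel is platform-dependent and left out of this sketch:

pip install sentence-transformers==3.1.1 transformers==4.44.2 accelerate==0.34.2 datasets==2.19.2 tokenizers==0.19.1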
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
MultipleNegativesRankingLoss
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}