CrossEncoder based on BAAI/bge-reranker-base
This is a Cross Encoder model finetuned from BAAI/bge-reranker-base using the sentence-transformers library. It computes scores for pairs of texts, which can be used for text reranking and semantic search.
Model Details
Model Description
- Model Type: Cross Encoder
- Base model: BAAI/bge-reranker-base
- Maximum Sequence Length: 512 tokens
- Number of Output Labels: 1 label
Model Sources
- Documentation: Sentence Transformers Documentation
- Documentation: Cross Encoder Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Cross Encoders on Hugging Face
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import CrossEncoder

# Download from the 🤗 Hub
model = CrossEncoder("foochun/bge-reranker-ft")

# Get scores for pairs of texts
pairs = [
    ['quinn toh heng yi', 'heng yi toh quinn'],
    ['mohd iskandi bin hassan', 'muhd iskandi hassan'],
    ['quinn ng ee siu', 'quinn ee siu ng'],
    ['malini doraisamy', 'malini doraisamy'],
    ['see shan fui', 'shanfui see'],
]
scores = model.predict(pairs)
print(scores.shape)
# (5,)

# Or rank different texts based on similarity to a single text
ranks = model.rank(
    'quinn toh heng yi',
    [
        'heng yi toh quinn',
        'muhd iskandi hassan',
        'quinn ee siu ng',
        'malini doraisamy',
        'shanfui see',
    ],
)
# [{'corpus_id': ..., 'score': ...}, {'corpus_id': ..., 'score': ...}, ...]
```
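Since the training data below consists of person-name variants, the scores can back a simple duplicate check. The following is a minimal sketch, not part of the official API: `is_same_person` is a hypothetical helper, and the 0.5 cutoff is an assumed, untuned threshold.

```python
from sentence_transformers import CrossEncoder

model = CrossEncoder("foochun/bge-reranker-ft")

def is_same_person(name_a: str, name_b: str, threshold: float = 0.5) -> bool:
    """Hypothetical helper: treat two name strings as the same person when
    the reranker's sigmoid score clears an assumed 0.5 cutoff."""
    score = model.predict([(name_a, name_b)])[0]
    return bool(score >= threshold)

print(is_same_person("quinn toh heng yi", "heng yi toh quinn"))  # reordered name: should score high
print(is_same_person("quinn toh heng yi", "malini doraisamy"))   # different person: should score low
```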
Training Details
Training Dataset
Unnamed Dataset
- Size: 82,744 training samples
- Columns: `query`, `pos`, and `neg`
- Approximate statistics based on the first 1000 samples:

|             | query            | pos              | neg              |
|:------------|:-----------------|:-----------------|:-----------------|
| type        | string           | string           | string           |
| min length  | 9 characters     | 9 characters     | 9 characters     |
| mean length | 19.16 characters | 17.11 characters | 17.7 characters  |
| max length  | 42 characters    | 37 characters    | 38 characters    |
- Samples:

| query               | pos               | neg                 |
|:--------------------|:------------------|:--------------------|
| brandon teh min jun | jun teh min       | brandon min teh jun |
| suling anak peroi   | suling anak peroi | suling anak rahim   |
| chin sze tian       | szetian chin      | chin sze tian wong  |
- Loss: `MultipleNegativesRankingLoss` with these parameters:

```json
{
    "scale": 10.0,
    "num_negatives": 4,
    "activation_fn": "torch.nn.modules.activation.Sigmoid"
}
```
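For context, a triplet dataset in this shape can be built with the `datasets` library. A minimal sketch, using the sample rows from the table above; the (query, positive, negative) column order is what the loss consumes:

```python
from datasets import Dataset

# Sample rows copied from the table above; in practice this dataset
# holds 82,744 such (query, pos, neg) triplets.
train_dataset = Dataset.from_dict({
    "query": ["brandon teh min jun", "suling anak peroi", "chin sze tian"],
    "pos":   ["jun teh min", "suling anak peroi", "szetian chin"],
    "neg":   ["brandon min teh jun", "suling anak rahim", "chin sze tian wong"],
})
print(train_dataset)  # Dataset({features: ['query', 'pos', 'neg'], num_rows: 3})
```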
Evaluation Dataset
Unnamed Dataset
- Size: 11,820 evaluation samples
- Columns: `query`, `pos`, and `neg`
- Approximate statistics based on the first 1000 samples:

|             | query            | pos              | neg              |
|:------------|:-----------------|:-----------------|:-----------------|
| type        | string           | string           | string           |
| min length  | 10 characters    | 9 characters     | 9 characters     |
| mean length | 19.08 characters | 17.02 characters | 17.58 characters |
| max length  | 45 characters    | 40 characters    | 44 characters    |
- Samples:

| query                   | pos                 | neg                                |
|:------------------------|:--------------------|:-----------------------------------|
| quinn toh heng yi       | heng yi toh quinn   | toh yi heng                        |
| mohd iskandi bin hassan | muhd iskandi hassan | puteri balqis binti megat sulaiman |
| quinn ng ee siu         | quinn ee siu ng     | quinn ee ng siu                    |
- Loss: `MultipleNegativesRankingLoss` with these parameters:

```json
{
    "scale": 10.0,
    "num_negatives": 4,
    "activation_fn": "torch.nn.modules.activation.Sigmoid"
}
```
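These parameters map onto the CrossEncoder version of `MultipleNegativesRankingLoss` introduced in sentence-transformers v4. A minimal sketch of instantiating the same loss, assuming the base model is loaded fresh:

```python
import torch
from sentence_transformers import CrossEncoder
from sentence_transformers.cross_encoder.losses import MultipleNegativesRankingLoss

model = CrossEncoder("BAAI/bge-reranker-base", num_labels=1)

# Same parameters as listed above: each query is scored against its positive
# and 4 sampled negatives, with sigmoid-activated logits scaled by 10.
loss = MultipleNegativesRankingLoss(
    model=model,
    num_negatives=4,
    scale=10.0,
    activation_fn=torch.nn.Sigmoid(),
)
```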
Training Hyperparameters
Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `learning_rate`: 1e-05
- `warmup_ratio`: 0.1
- `seed`: 12
- `fp16`: True
- `dataloader_num_workers`: 4
- `load_best_model_at_end`: True
- `batch_sampler`: no_duplicates
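Wired together with the model, loss, and datasets from the sketches above, these values correspond roughly to the following v4 training setup. This is a sketch, not the author's actual script; `output_dir` and the `eval_dataset` variable are assumptions.

```python
from sentence_transformers.cross_encoder import (
    CrossEncoderTrainer,
    CrossEncoderTrainingArguments,
)

args = CrossEncoderTrainingArguments(
    output_dir="bge-reranker-ft",   # assumed output path
    num_train_epochs=3,             # from the full hyperparameter list below
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    learning_rate=1e-5,
    warmup_ratio=0.1,
    seed=12,
    fp16=True,
    dataloader_num_workers=4,
    eval_strategy="steps",
    load_best_model_at_end=True,
    batch_sampler="no_duplicates",  # avoids duplicate texts within a batch
)

trainer = CrossEncoderTrainer(
    model=model,                # CrossEncoder from the loss sketch above
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,  # assumed: built the same way as train_dataset
    loss=loss,
)
trainer.train()
```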
All Hyperparameters
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 1e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 12
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 4
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
Training Logs
| Epoch  | Step | Training Loss |
|:------:|:----:|:-------------:|
| 0.0008 | 1    | 0.4707        |
| 0.7734 | 1000 | 0.1114        |
| 1.5468 | 2000 | 0.0051        |
| 2.3202 | 3000 | 0.0046        |
Framework Versions
- Python: 3.11.9
- Sentence Transformers: 4.1.0
- Transformers: 4.52.4
- PyTorch: 2.6.0+cu124
- Accelerate: 1.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
Citation
BibTeX
Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```