PyLate model based on answerdotai/ModernBERT-base

This is a PyLate model finetuned from answerdotai/ModernBERT-base. It maps sentences & paragraphs to sequences of 128-dimensional dense vectors and can be used for semantic textual similarity using the MaxSim operator.
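
The MaxSim operator scores a query-document pair by matching each query token embedding to its most similar document token embedding and summing those maxima. A minimal PyTorch sketch of the idea (illustrative only, not PyLate's internal implementation; it assumes the token embeddings are L2-normalized, as is conventional for ColBERT-style models):

import torch

def maxsim_score(query_embeddings: torch.Tensor, document_embeddings: torch.Tensor) -> torch.Tensor:
    # Pairwise similarities between query and document tokens;
    # with L2-normalized rows this is the cosine similarity.
    similarities = query_embeddings @ document_embeddings.T  # (n_query_tokens, n_doc_tokens)
    # For each query token, keep its best-matching document token, then sum over query tokens.
    return similarities.max(dim=1).values.sum()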

Model Details

Model Description

  • Model Type: PyLate model
  • Base model: answerdotai/ModernBERT-base
  • Document Length: 180 tokens
  • Query Length: 32 tokens
  • Output Dimensionality: 128 dimensions
  • Similarity Function: MaxSim

Full Model Architecture

ColBERT(
  (0): Transformer({'max_seq_length': 179, 'do_lower_case': False}) with Transformer model: ModernBertModel 
  (1): Dense({'in_features': 768, 'out_features': 128, 'bias': False, 'activation_function': 'torch.nn.modules.linear.Identity'})
)
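
The Dense module projects ModernBERT's 768-dimensional token representations down to 128 dimensions, so the model outputs one 128-dimensional vector per token rather than a single pooled vector. A quick way to verify this (a sketch; the exact return type of encode may vary across PyLate versions):

from pylate import models

model = models.ColBERT(
    model_name_or_path="ayushexel/colbert-ModernBERT-base-5-neg-5-epoch-gooaq-1995000",
)
embeddings = model.encode(["a short example document"], is_query=False)
print(embeddings[0].shape)  # (num_tokens, 128): one 128-dimensional vector per token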

Usage

First install the PyLate library:

pip install -U pylate

Retrieval

PyLate provides a streamlined interface to index and retrieve documents using ColBERT models. The index leverages the Voyager HNSW index to efficiently handle document embeddings and enable fast retrieval.

Indexing documents

First, load the ColBERT model and initialize the Voyager index, then encode and index your documents:

from pylate import indexes, models, retrieve

# Step 1: Load the ColBERT model
model = models.ColBERT(
    model_name_or_path="ayushexel/colbert-ModernBERT-base-5-neg-5-epoch-gooaq-1995000",
)

# Step 2: Initialize the Voyager index
index = indexes.Voyager(
    index_folder="pylate-index",
    index_name="index",
    override=True,  # This overwrites the existing index if any
)

# Step 3: Encode the documents
documents_ids = ["1", "2", "3"]
documents = ["document 1 text", "document 2 text", "document 3 text"]

documents_embeddings = model.encode(
    documents,
    batch_size=32,
    is_query=False,  # Ensure that it is set to False to indicate that these are documents, not queries
    show_progress_bar=True,
)

# Step 4: Add document embeddings to the index by providing embeddings and corresponding ids
index.add_documents(
    documents_ids=documents_ids,
    documents_embeddings=documents_embeddings,
)

Note that you do not have to recreate the index and encode the documents every time. Once you have created an index and added the documents, you can re-use the index later by loading it:

# To load an index, simply instantiate it with the correct folder/name and without overriding it
index = indexes.Voyager(
    index_folder="pylate-index",
    index_name="index",
)

Retrieving top-k documents for queries

Once the documents are indexed, you can retrieve the top-k most relevant documents for a given set of queries. To do so, initialize the ColBERT retriever with the index you want to search, encode the queries, and then retrieve the top-k documents to get the matching ids and relevance scores:

# Step 1: Initialize the ColBERT retriever
retriever = retrieve.ColBERT(index=index)

# Step 2: Encode the queries
queries_embeddings = model.encode(
    ["query for document 3", "query for document 1"],
    batch_size=32,
    is_query=True,  # Ensure that it is set to True to indicate that these are queries, not documents
    show_progress_bar=True,
)

# Step 3: Retrieve top-k documents
scores = retriever.retrieve(
    queries_embeddings=queries_embeddings,
    k=10,  # Retrieve the top 10 matches for each query
)
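
The returned scores object holds, for each query, the top-k matches in decreasing relevance order. A small sketch for inspecting it, assuming each match is a dictionary with "id" and "score" keys as in PyLate's documentation:

queries = ["query for document 3", "query for document 1"]
for query, matches in zip(queries, scores):
    print(query)
    for match in matches:
        print(f"  id={match['id']}  score={match['score']:.4f}")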

Reranking

If you only want to use the ColBERT model to perform reranking on top of your first-stage retrieval pipeline without building an index, you can simply use the rank function and pass it the queries and documents to rerank:

from pylate import rank, models

queries = [
    "query A",
    "query B",
]

documents = [
    ["document A", "document B"],
    ["document 1", "document C", "document B"],
]

documents_ids = [
    [1, 2],
    [1, 3, 2],
]

model = models.ColBERT(
    model_name_or_path="ayushexel/colbert-ModernBERT-base-5-neg-5-epoch-gooaq-1995000",
)

queries_embeddings = model.encode(
    queries,
    is_query=True,
)

documents_embeddings = model.encode(
    documents,
    is_query=False,
)

reranked_documents = rank.rerank(
    documents_ids=documents_ids,
    queries_embeddings=queries_embeddings,
    documents_embeddings=documents_embeddings,
)
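
reranked_documents then contains, for each query, its candidate documents reordered by MaxSim score. A brief inspection sketch, assuming the same id/score dictionary format as the retrieval results:

for query, ranking in zip(queries, reranked_documents):
    print(query)
    for document in ranking:
        print(f"  id={document['id']}  score={document['score']:.4f}")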

Evaluation

Metrics

ColBERTTriplet

  • Evaluated with pylate.evaluation.colbert_triplet.ColBERTTripletEvaluator
    Metric     Value
    accuracy   0.5022
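
To run the same evaluator on your own data, a hedged sketch along these lines should work (the triplet texts here are placeholders, and the argument names follow PyLate's documented anchors/positives/negatives convention; check the PyLate docs for the exact signature):

from pylate import evaluation, models

model = models.ColBERT(
    model_name_or_path="ayushexel/colbert-ModernBERT-base-5-neg-5-epoch-gooaq-1995000",
)

evaluator = evaluation.ColBERTTripletEvaluator(
    anchors=["example query"],
    positives=["a passage that answers the query"],
    negatives=["an unrelated passage"],
)
print(evaluator(model))  # accuracy: fraction of triplets where the positive outscores the negative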

Training Details

Training Dataset

Unnamed Dataset

  • Size: 9,383,917 training samples
  • Columns: question, answer, and negative
  • Approximate statistics based on the first 1000 samples:
                 question       answer         negative
    type         string         string         string
    min tokens   9              16             14
    mean tokens  13.3           31.77          31.54
    max tokens   21             32             32
  • Samples:
    question: are mandarins same as clementines?
    answer: Mandarins… When it comes to Clementines vs. Mandarins, the Mandarin is the master orange of the family, and Clementines, tangerines, and satsumas all fall under this umbrella.
    negative: A: CUTIES® are actually two varieties of mandarins: Clementine mandarins, available November through January; and W. Murcott mandarins, available February through April. ... Unlike other mandarins or oranges, they are seedless, super sweet, easy to peel and kid-sized—only a select few achieve CUTIES® ' high standards.

    question: are mandarins same as clementines?
    answer: Mandarins… When it comes to Clementines vs. Mandarins, the Mandarin is the master orange of the family, and Clementines, tangerines, and satsumas all fall under this umbrella.
    negative: Most of all, there's AJ, the infant son of Clementine's ally Rebecca, who Clementine promised to raise when Rebecca died back in Season Two. The Final Season rejoins Clementine and AJ, now around six years old, on the open road.

    question: are mandarins same as clementines?
    answer: Mandarins… When it comes to Clementines vs. Mandarins, the Mandarin is the master orange of the family, and Clementines, tangerines, and satsumas all fall under this umbrella.
    negative: Clementines — commonly known by the brand names Cuties or Halos — are a hybrid of mandarin and sweet oranges. These tiny fruits are bright orange, easy to peel, sweeter than most other citrus fruits, and typically seedless.
  • Loss: pylate.losses.contrastive.Contrastive
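
The triplet layout above corresponds to a datasets.Dataset with question, answer, and negative columns. A minimal sketch of building one with placeholder rows (the actual training set holds 9,383,917 such triplets):

from datasets import Dataset

train_dataset = Dataset.from_dict({
    "question": ["are mandarins same as clementines?"],  # placeholder rows
    "answer": ["Mandarins are the family that clementines, tangerines, and satsumas fall under."],
    "negative": ["A passage that does not answer the question."],
})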

Evaluation Dataset

Unnamed Dataset

  • Size: 5,000 evaluation samples
  • Columns: question, answer, and negative_1
  • Approximate statistics based on the first 1000 samples:
                 question       answer         negative_1
    type         string         string         string
    min tokens   9              16             15
    mean tokens  13.02          31.66          31.41
    max tokens   25             32             32
  • Samples:
    question answer negative_1
    what is the best shampoo for thin curly hair? ['Best For Daily Cleansing: Mizani True Textures Cream Cleansing Conditioner. ... ', 'Best For Coils: Ouidad VitalCurl Clear & Gentle Shampoo. ... ', 'Best For Restoring Shine: Shea Moisture Coconut & Hibiscus Curl & Shine Shampoo. ... ', 'Best For Fine Curls: Renee Furterer Sublime Curl Curl Activating Shampoo.'] Whether you have straight or curly hair, thin or thick, this is another option that you should not miss for the best OGX shampoo. The Australian tea tree oils in this shampoo are effective for repair of oily, damaged, and frizzy hair. ... It also makes a great choice of shampoo for people who have dry scalp.
    how many days after my period do i start ovulating? Many women typically ovulate around 12 to 14 days after the first day of their last period, but some have a naturally short cycle. They may ovulate as soon as six days or so after the first day of their last period. If you have a short cycle, for example, 21 days, and you bleed for 7 days, then you could ovulate right after your period. This is because ovulation generally occurs 12-16 days before your next period begins, and this would estimate you ovulating at days 6-10 of your cycle.
    are the apes in planet of the apes cgi? Unlike in the original 1968 film, there are no monkey suits, heavy makeup jobs or wigs. All of the apes audiences see on-screen are motion-capture CGI apes, which lends them a more realistic effect as the CGI is based on the actors' actual movements. Among the living primates, humans are most closely related to the apes, which include the lesser apes (gibbons) and the great apes (chimpanzees, gorillas and orangutans).
  • Loss: pylate.losses.contrastive.Contrastive

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 180
  • per_device_eval_batch_size: 180
  • learning_rate: 3e-06
  • num_train_epochs: 5
  • warmup_ratio: 0.1
  • seed: 12
  • bf16: True
  • dataloader_num_workers: 12
  • load_best_model_at_end: True
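
These settings map onto sentence-transformers training arguments. A hedged sketch of how a comparable run could be wired up with PyLate's Contrastive loss (output_dir and the datasets are placeholders, and the collator/trainer wiring should be double-checked against the PyLate training docs):

from sentence_transformers import (
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from pylate import losses, models, utils

model = models.ColBERT(model_name_or_path="answerdotai/ModernBERT-base")

args = SentenceTransformerTrainingArguments(
    output_dir="output",  # placeholder
    eval_strategy="steps",
    per_device_train_batch_size=180,
    per_device_eval_batch_size=180,
    learning_rate=3e-6,
    num_train_epochs=5,
    warmup_ratio=0.1,
    seed=12,
    bf16=True,
    dataloader_num_workers=12,
    load_best_model_at_end=True,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,  # question/answer/negative triplets, as sketched above
    eval_dataset=eval_dataset,    # placeholder
    loss=losses.Contrastive(model=model),
    data_collator=utils.ColBERTCollator(model.tokenize),
)
trainer.train()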

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 180
  • per_device_eval_batch_size: 180
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 3e-06
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 5
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 12
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: True
  • dataloader_num_workers: 12
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch Step Training Loss Validation Loss accuracy
0 0 - - 0.4560
0.0002 1 22.6729 - -
0.0307 200 16.3893 - -
0.0614 400 7.1556 - -
0.0921 600 4.4451 - -
0.1228 800 1.8384 - -
0.1535 1000 1.0792 - -
0.1842 1200 0.8636 - -
0.2149 1400 0.7355 - -
0.2455 1600 0.6498 - -
0.2762 1800 0.5801 - -
0.3069 2000 0.5318 - -
0.3376 2200 0.49 - -
0.3683 2400 0.4515 - -
0.3990 2600 0.4245 - -
0.4297 2800 0.3929 - -
0.4604 3000 0.3704 - -
0.4911 3200 0.3505 - -
0.5218 3400 0.3294 - -
0.5525 3600 0.3114 - -
0.5832 3800 0.297 - -
0.6139 4000 0.281 - -
0.6446 4200 0.2723 - -
0.6753 4400 0.2589 - -
0.7060 4600 0.2518 - -
0.7366 4800 0.2437 - -
0.7673 5000 0.2333 - -
0.7980 5200 0.2285 - -
0.8287 5400 0.2236 - -
0.8594 5600 0.2144 - -
0.8901 5800 0.2122 - -
0.9208 6000 0.2093 - -
0.9515 6200 0.2015 - -
0.9822 6400 0.1984 - -
1.0129 6600 0.1936 - -
1.0436 6800 0.1885 - -
1.0743 7000 0.1841 - -
1.1050 7200 0.1818 - -
1.1357 7400 0.1805 - -
1.1664 7600 0.1774 - -
1.1971 7800 0.1742 - -
1.2277 8000 0.1722 - -
1.2584 8200 0.1679 - -
1.2891 8400 0.1671 - -
1.3198 8600 0.1646 - -
1.3505 8800 0.1639 - -
1.3812 9000 0.161 - -
1.4119 9200 0.1604 - -
1.4426 9400 0.1585 - -
1.4733 9600 0.1562 - -
1.5040 9800 0.1548 - -
1.5347 10000 0.1528 - -
1.5654 10200 0.1519 - -
1.5961 10400 0.1492 - -
1.6268 10600 0.149 - -
1.6575 10800 0.1481 - -
1.6882 11000 0.1473 - -
1.7188 11200 0.1467 - -
1.7495 11400 0.1448 - -
1.7802 11600 0.1413 - -
1.8109 11800 0.142 - -
1.8416 12000 0.1398 - -
1.8723 12200 0.1385 - -
1.9030 12400 0.1398 - -
1.9337 12600 0.1375 - -
1.9644 12800 0.1376 - -
1.9951 13000 0.1369 - -
2.0258 13200 0.1303 - -
2.0565 13400 0.1305 - -
2.0872 13600 0.1286 - -
2.1179 13800 0.1266 - -
2.1486 14000 0.1273 - -
2.1793 14200 0.1269 - -
2.2099 14400 0.1253 - -
2.2406 14600 0.1263 - -
2.2713 14800 0.1249 - -
2.3020 15000 0.1248 - -
2.3327 15200 0.1227 - -
2.3634 15400 0.1239 - -
2.3941 15600 0.1233 - -
2.4248 15800 0.1211 - -
2.4555 16000 0.1208 - -
2.4862 16200 0.1206 - -
2.5169 16400 0.1211 - -
2.5476 16600 0.1209 - -
2.5783 16800 0.1195 - -
2.6090 17000 0.1192 - -
2.6397 17200 0.1176 - -
2.6703 17400 0.1177 - -
2.7010 17600 0.1168 - -
2.7317 17800 0.1163 - -
2.7624 18000 0.116 - -
2.7931 18200 0.1165 - -
2.8238 18400 0.1157 - -
2.8545 18600 0.1145 - -
2.8852 18800 0.1154 - -
2.9159 19000 0.1153 - -
2.9466 19200 0.1132 - -
2.9773 19400 0.1128 - -
3.0080 19600 0.1121 - -
3.0387 19800 0.1099 - -
3.0694 20000 0.1087 1.1151 0.5022  (saved checkpoint)
3.1001 20200 0.1086 - -
3.1308 20400 0.108 - -
3.1614 20600 0.1087 - -
3.1921 20800 0.1084 - -
3.2228 21000 0.1072 - -
3.2535 21200 0.1087 - -
3.2842 21400 0.1067 - -
3.3149 21600 0.1073 - -
3.3456 21800 0.1067 - -
3.3763 22000 0.1045 - -
3.4070 22200 0.105 - -
3.4377 22400 0.1046 - -
3.4684 22600 0.1061 - -
3.4991 22800 0.1043 - -
3.5298 23000 0.105 - -
3.5605 23200 0.105 - -
3.5912 23400 0.1047 - -
3.6219 23600 0.1034 - -
3.6525 23800 0.1037 - -
3.6832 24000 0.1042 - -
3.7139 24200 0.1038 - -
3.7446 24400 0.1039 - -
3.7753 24600 0.1031 - -
3.8060 24800 0.1019 - -
3.8367 25000 0.1023 - -
3.8674 25200 0.1036 - -
3.8981 25400 0.1022 - -
3.9288 25600 0.102 - -
3.9595 25800 0.1022 - -
3.9902 26000 0.1017 - -
4.0209 26200 0.0997 - -
4.0516 26400 0.0992 - -
4.0823 26600 0.0993 - -
4.1130 26800 0.099 - -
4.1436 27000 0.098 - -
4.1743 27200 0.0986 - -
4.2050 27400 0.0987 - -
4.2357 27600 0.0993 - -
4.2664 27800 0.0991 - -
4.2971 28000 0.0993 - -
4.3278 28200 0.098 - -
4.3585 28400 0.0979 - -
4.3892 28600 0.0967 - -
4.4199 28800 0.0983 - -
4.4506 29000 0.0976 - -
4.4813 29200 0.0975 - -
4.5120 29400 0.0979 - -
4.5427 29600 0.0971 - -
4.5734 29800 0.0972 - -
4.6041 30000 0.0969 - -
4.6347 30200 0.0972 - -
4.6654 30400 0.0975 - -
4.6961 30600 0.0987 - -
4.7268 30800 0.0964 - -
4.7575 31000 0.0974 - -
4.7882 31200 0.0964 - -
4.8189 31400 0.0974 - -
4.8496 31600 0.0974 - -
4.8803 31800 0.0975 - -
4.9110 32000 0.097 - -
4.9417 32200 0.0973 - -
4.9724 32400 0.0973 - -
  • The row marked (saved checkpoint), at step 20000, denotes the saved checkpoint.

Framework Versions

  • Python: 3.11.0
  • Sentence Transformers: 4.0.1
  • PyLate: 1.1.7
  • Transformers: 4.48.2
  • PyTorch: 2.6.0+cu124
  • Accelerate: 1.6.0
  • Datasets: 3.5.0
  • Tokenizers: 0.21.1
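
To approximate this environment, the versions above can be pinned at install time (a hedged example; adjust the PyTorch build to match your CUDA setup):

pip install "pylate==1.1.7" "sentence-transformers==4.0.1" "transformers==4.48.2" "torch==2.6.0" "accelerate==1.6.0" "datasets==3.5.0" "tokenizers==0.21.1"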

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084"
}

PyLate

@misc{PyLate,
    title={PyLate: Flexible Training and Retrieval for Late Interaction Models},
    author={Chaffin, Antoine and Sourty, Raphaël},
    url={https://github.com/lightonai/pylate},
    year={2024}
}