---
language:
- en
license: apache-2.0
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:8760
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: nomic-ai/modernbert-embed-base
widget:
- source_sentence: What is the interpretation described as inappropriate?
  sentences:
  - . Factors to be considered in determining the reasonableness of the lawyer’s expectation of confidentiality include the sensitivity of the information and the extent to which the privacy of the communication is protected by law or by a confidentiality agreement
  - . 20 of competition and rests on an inappropriate interpretation of SBA regulation 13 C.F.R. § 125.9(b)(3)(i). See SHS MJAR at 16–23; VCH MJAR at 16–23
  - . 29-2, the CIA’s declaration explains in much more detail what is meant by “intelligence sources and methods” or “intelligence activities,” see Third Lutz Decl. ¶–30
- source_sentence: What is the source of the information regarding Senetas's knowledge about FDA approval?
  sentences:
  - . . . the exemption under which the deletion is made, shall be indicated at the place in the record where such deletion is made.” Id. Finally, the FOIA provides that “a court shall accord substantial weight to an affidavit of an agency concerning the agency’s determination as to technical feasibility under . . . subsection (b).” Id. § 552(a)(4)(B)
  - . 52 Senetas asserts that it learned about the plan to discontinue seeking FDA approval for DR’s products in September of 2018 after the decision had been made without any Board involvement. Galbally Dep. Tr. 66:19-23
  - . Conclusion Video footage, like social media evidence, is susceptible to alteration, and the increased availability of new technology, particularly the advent of image-generating artificial intelligence, may present unique challenges in authenticating videos and photographs
- source_sentence: What does Class Deviation CD-2020-14 allow for at the contract level?
  sentences:
  - social media company that 7At trial, the State had attempted to introduce evidence that was purportedly a printout from the MySpace page of the girlfriend of the defendant (whose nickname was allegedly “Boozy”) to demonstrate that the girlfriend had threatened a State’s witness
  - .” Supplement 2 to Class Deviation CD-2020-14 (Supplement 2), AR at 2904. The Senior Procurement Executive further elaborated that Class Deviation CD-2020-14 “allowed for the use of ‘unpriced labor’ categories at the contract level for certain IDIQ multiple-award contracts.” Id
  - . Circuit has recognized that, separate from claims seeking relief for specific requests made under the FOIA, requesting parties may also assert a “claim that an agency policy or practice will impair the party’s lawful access to information in the future.” Payne Enters., Inc. v. United States, 837 F.2d 486, 491 (D.C. Cir. 1988) (emphasis in original); 31 accord Newport Aeronautical Sales v
- source_sentence: What should the agency describe about the non-exempt material in a document?
  sentences:
  - . A straightforward reading of the 2019 NDAA reveals that the Commission’s members are “temporary” federal employees. The Commission “shall be considered . . . a temporary organization under [5 U.S.C. § 3161].” Pub. L. No. 115-232, § 1051(a)(2). The Commission’s 15 members are “appointed for the life of the Commission” and are “Federal employees.” Id. § 1051(a)(4)(A), (6)–(7)
  - .15 Posteriormente, en armonía con el marco constitucional y doctrinario previamente reseñado, el 13 de julio de 2011, nuestra Legislatura aprobó, la Ley del Derecho sobre la Propia Imagen o Ley Núm. 139-201116. Dicho precepto legal estatuye una causa de acción en daños y perjuicios debido al uso no autorizado de la imagen con fines comerciales o publicitarios
  - . To this end, the Circuit has said that “[i]n addition to a statement of its reasons, an agency should also describe what proportion of the information in a document is non-exempt and how that material is dispersed throughout the document.” Id
- source_sentence: Which offeror is mentioned as getting in if there is a points discrepancy?
  sentences:
  - . at 9:14–19 (“[I]f an offeror does not have the same number of points, if it’s the 130th offeror and it doesn’t have the same number of points as the 90th offeror, then the solicitation says the 90th offeror gets in and the 130th doesn’t.”)
  - '. But the State had to establish that the communications were the handiwork of the defendant. It was in that context that temporal proximity came into play: The timing of the communications relative to other events connecting the defendant to the alleged crime was circumstantial evidence of the defendant’s authorship. Id. at 674-76'
  - . Since the plaintiff does not address this issue in its sur-reply brief in No. 11-445, and because the plaintiff does not ask the Court to direct the DOJ to produce Document 3 to the plaintiff, the plaintiff does not appear to continue to challenge the DOJ’s decision to withhold Document 3. 140 recorded decision to implement the opinion.” Id. at 32
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: Fine-tuned with [QuicKB](https://github.com/ALucek/QuicKB)
  results:
  - task:
      type: information-retrieval
      name: Information Retrieval
    dataset:
      name: dim 768
      type: dim_768
    metrics:
    - type: cosine_accuracy@1
      value: 0.582135523613963
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 0.7494866529774127
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 0.795687885010267
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 0.8572895277207392
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.582135523613963
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.24982888432580422
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.1591375770020534
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.08572895277207392
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.582135523613963
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 0.7494866529774127
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 0.795687885010267
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 0.8572895277207392
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.7211793259435271
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.6775296600501939
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.6827316333877884
      name: Cosine Map@100
  - task:
      type: information-retrieval
      name: Information Retrieval
    dataset:
      name: dim 512
      type: dim_512
    metrics:
    - type: cosine_accuracy@1
      value: 0.5657084188911704
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 0.7330595482546202
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 0.7915811088295688
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 0.8531827515400411
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.5657084188911704
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.24435318275154005
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.15831622176591376
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.08531827515400411
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.5657084188911704
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 0.7330595482546202
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 0.7915811088295688
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 0.8531827515400411
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.7102670568981261
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.6645362765229291
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.6695389256684248
      name: Cosine Map@100
  - task:
      type: information-retrieval
      name: Information Retrieval
    dataset:
      name: dim 256
      type: dim_256
    metrics:
    - type: cosine_accuracy@1
      value: 0.5410677618069816
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 0.7063655030800822
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 0.7659137577002053
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 0.8305954825462012
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.5410677618069816
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.2354551676933607
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.15318275154004105
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.08305954825462013
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.5410677618069816
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 0.7063655030800822
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 0.7659137577002053
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 0.8305954825462012
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.6839216686374571
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.6371842508392814
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.6427516419970609
      name: Cosine Map@100
  - task:
      type: information-retrieval
      name: Information Retrieval
    dataset:
      name: dim 128
      type: dim_128
    metrics:
    - type: cosine_accuracy@1
      value: 0.4887063655030801
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 0.6581108829568788
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 0.7176591375770021
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 0.7802874743326489
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.4887063655030801
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.2193702943189596
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.14353182751540042
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.07802874743326488
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.4887063655030801
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 0.6581108829568788
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 0.7176591375770021
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 0.7802874743326489
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.6318826024721981
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.5846004041589256
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.5917468903182894
      name: Cosine Map@100
  - task:
      type: information-retrieval
      name: Information Retrieval
    dataset:
      name: dim 64
      type: dim_64
    metrics:
    - type: cosine_accuracy@1
      value: 0.3798767967145791
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 0.5462012320328542
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 0.6139630390143738
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 0.704312114989733
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.3798767967145791
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.1820670773442847
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.12279260780287474
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.0704312114989733
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.3798767967145791
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 0.5462012320328542
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 0.6139630390143738
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 0.704312114989733
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.5333651837657117
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.4796983475114887
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.4877644055271696
      name: Cosine Map@100
---

# Fine-tuned with [QuicKB](https://github.com/ALucek/QuicKB)

This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [nomic-ai/modernbert-embed-base](https://huggingface.co/nomic-ai/modernbert-embed-base). It maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

## Model Details

### Model Description

- **Model Type:** Sentence Transformer
- **Base model:** [nomic-ai/modernbert-embed-base](https://huggingface.co/nomic-ai/modernbert-embed-base)
- **Maximum Sequence Length:** 1024 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Language:** en
- **License:** apache-2.0

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 1024, 'do_lower_case': False}) with Transformer model: ModernBertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.

```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("AdamLucek/modernbert-embed-quickb")
# Run inference
sentences = [
    'Which offeror is mentioned as getting in if there is a points discrepancy?',
    '. at 9:14–19 (“[I]f an offeror does not have the same number of points, if it’s the 130th offeror and it doesn’t have the same number of points as the 90th offeror, then the solicitation says the 90th offeror gets in and the 130th doesn’t.”)',
    '. Since the plaintiff does not address this issue in its sur-reply brief in No. 11-445, and because the plaintiff does not ask the Court to direct the DOJ to produce Document 3 to the plaintiff, the plaintiff does not appear to continue to challenge the DOJ’s decision to withhold Document 3. 140 recorded decision to implement the opinion.” Id. at 32',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
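Because the model was trained with MatryoshkaLoss over dimensions 768, 512, 256, 128, and 64, its embeddings can be truncated to any of those sizes at inference time, with the accuracy trade-offs shown in the evaluation tables below. A minimal sketch using the standard `truncate_dim` argument of `SentenceTransformer`; the dimension chosen here is illustrative:

```python
from sentence_transformers import SentenceTransformer

# Keep only the first 256 of the 768 embedding dimensions; any of the
# trained Matryoshka sizes (768, 512, 256, 128, 64) is a sensible choice.
model = SentenceTransformer("AdamLucek/modernbert-embed-quickb", truncate_dim=256)

embeddings = model.encode([
    "What does Class Deviation CD-2020-14 allow for at the contract level?",
])
print(embeddings.shape)
# (1, 256)
```

Smaller dimensions cut vector-store memory and search latency roughly in proportion, which is the usual reason to accept the few points of NDCG given up in the tables below.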
## Evaluation

### Metrics

#### Information Retrieval

* Datasets: `dim_768`, `dim_512`, `dim_256`, `dim_128` and `dim_64`
* Evaluated with [InformationRetrievalEvaluator](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)

| Metric              | dim_768    | dim_512    | dim_256    | dim_128    | dim_64     |
|:--------------------|:-----------|:-----------|:-----------|:-----------|:-----------|
| cosine_accuracy@1   | 0.5821     | 0.5657     | 0.5411     | 0.4887     | 0.3799     |
| cosine_accuracy@3   | 0.7495     | 0.7331     | 0.7064     | 0.6581     | 0.5462     |
| cosine_accuracy@5   | 0.7957     | 0.7916     | 0.7659     | 0.7177     | 0.614      |
| cosine_accuracy@10  | 0.8573     | 0.8532     | 0.8306     | 0.7803     | 0.7043     |
| cosine_precision@1  | 0.5821     | 0.5657     | 0.5411     | 0.4887     | 0.3799     |
| cosine_precision@3  | 0.2498     | 0.2444     | 0.2355     | 0.2194     | 0.1821     |
| cosine_precision@5  | 0.1591     | 0.1583     | 0.1532     | 0.1435     | 0.1228     |
| cosine_precision@10 | 0.0857     | 0.0853     | 0.0831     | 0.078      | 0.0704     |
| cosine_recall@1     | 0.5821     | 0.5657     | 0.5411     | 0.4887     | 0.3799     |
| cosine_recall@3     | 0.7495     | 0.7331     | 0.7064     | 0.6581     | 0.5462     |
| cosine_recall@5     | 0.7957     | 0.7916     | 0.7659     | 0.7177     | 0.614      |
| cosine_recall@10    | 0.8573     | 0.8532     | 0.8306     | 0.7803     | 0.7043     |
| **cosine_ndcg@10**  | **0.7212** | **0.7103** | **0.6839** | **0.6319** | **0.5334** |
| cosine_mrr@10       | 0.6775     | 0.6645     | 0.6372     | 0.5846     | 0.4797     |
| cosine_map@100      | 0.6827     | 0.6695     | 0.6428     | 0.5917     | 0.4878     |
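The evaluator can also be re-run locally, for example to score a dimension not listed above. A minimal sketch with placeholder inputs; the figures in the table come from a held-out QuicKB evaluation split, so the `queries`, `corpus`, and `relevant_docs` dicts below merely stand in for that data:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("AdamLucek/modernbert-embed-quickb", truncate_dim=256)

# Placeholder data; substitute the real evaluation split.
queries = {"q1": "Which offeror is mentioned as getting in if there is a points discrepancy?"}
corpus = {"d1": "... then the solicitation says the 90th offeror gets in and the 130th doesn't ..."}
relevant_docs = {"q1": {"d1"}}  # query id -> set of relevant corpus ids

evaluator = InformationRetrievalEvaluator(
    queries=queries,
    corpus=corpus,
    relevant_docs=relevant_docs,
    name="dim_256",
)
results = evaluator(model)
print(results["dim_256_cosine_ndcg@10"])
```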
## Training Details

### Training Dataset

#### Unnamed Dataset

* Size: 8,760 training samples
* Columns: `anchor` and `positive`
* Approximate statistics based on the first 1000 samples:
  |         | anchor | positive |
  |:--------|:-------|:---------|
  | type    | string | string   |
  | details |        |          |
* Samples:
  | anchor | positive |
  |:-------|:---------|
  | What is being compared in the Circuit’s statement? | .2d at 1389–90. The Circuit rejected this analogy, stating that, in contrast to the CIA Act, the NSA Act “protects not only organizational matters . . . but also ‘any information with respect to the activities’ of the NSA.” Id. at 1390 |
  | What type of internal documents used by the CIA in FOIA requests is mentioned? | . 108 Accordingly, the Court holds that certain specific categories of information withheld by the CIA in this case pursuant to § 403g clearly fall outside that provision’s scope, including (1) internal templates utilized by the CIA in tasking FOIA requests, (2) internal rules, policies and procedures governing FOIA processing, and (7) information about the CIA’s “core functions,” including |
  | How many documents did the CIA withhold under Exemption 2? | . The CIA states in its declaration that all thirteen documents withheld under 38 The plaintiff previously indicated that it intended to challenge Exemption 2 withholding decisions made by the ODNI as well. See Hackett Decl. Ex. E at 1, ECF No. 29-8. The plaintiff, however, does not pursue that challenge in its opposition to the defendants’ motions for summary judgment in No. 11-445 |
* Loss: [MatryoshkaLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
  ```json
  {
      "loss": "MultipleNegativesRankingLoss",
      "matryoshka_dims": [
          768,
          512,
          256,
          128,
          64
      ],
      "matryoshka_weights": [
          1,
          1,
          1,
          1,
          1
      ],
      "n_dims_per_step": -1
  }
  ```
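These parameters map directly onto the Sentence Transformers loss classes. A minimal sketch of how such a loss is typically constructed (`model` here is the SentenceTransformer being fine-tuned):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("nomic-ai/modernbert-embed-base")

# In-batch negatives over the (anchor, positive) pairs from the dataset above.
base_loss = MultipleNegativesRankingLoss(model)

# Wrap it so the same ranking objective is applied at every truncated dimension.
loss = MatryoshkaLoss(
    model,
    loss=base_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
)
```

Equal weights mean no dimension is favored during training; `n_dims_per_step: -1` trains on all five dimensions at every step.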
### Training Hyperparameters

#### Non-Default Hyperparameters

- `eval_strategy`: epoch
- `per_device_train_batch_size`: 32
- `gradient_accumulation_steps`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 4
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `bf16`: True
- `tf32`: True
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `batch_sampler`: no_duplicates

#### All Hyperparameters

<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 16
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: True
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`: 
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional

</details>
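Reproducing the run is mostly a matter of passing these values through the Sentence Transformers v3 training API. A minimal sketch under stated assumptions: the dataset construction is a hypothetical stand-in for the 8,760 QuicKB (anchor, positive) pairs, the output directory name is illustrative, and evaluation/checkpoint selection is omitted for brevity:

```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss
from sentence_transformers.training_args import BatchSamplers

model = SentenceTransformer("nomic-ai/modernbert-embed-base")

# Hypothetical stand-in for the QuicKB anchor/positive pairs.
train_dataset = Dataset.from_dict({
    "anchor": ["What does Class Deviation CD-2020-14 allow for at the contract level?"],
    "positive": ["... allowed for the use of 'unpriced labor' categories at the contract level ..."],
})

loss = MatryoshkaLoss(
    model,
    MultipleNegativesRankingLoss(model),
    matryoshka_dims=[768, 512, 256, 128, 64],
)

args = SentenceTransformerTrainingArguments(
    output_dir="modernbert-embed-quickb",      # illustrative
    num_train_epochs=4,
    per_device_train_batch_size=32,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    bf16=True,
    tf32=True,
    optim="adamw_torch_fused",
    # Avoids duplicate positives within a batch, which would act as false negatives
    # for MultipleNegativesRankingLoss.
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()
```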
### Training Logs

| Epoch      | Step   | Training Loss | dim_768_cosine_ndcg@10 | dim_512_cosine_ndcg@10 | dim_256_cosine_ndcg@10 | dim_128_cosine_ndcg@10 | dim_64_cosine_ndcg@10 |
|:----------:|:------:|:-------------:|:----------------------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|
| 0.5839     | 10     | 67.1727       | -                      | -                      | -                      | -                      | -                     |
| 1.0        | 18     | -             | 0.6999                 | 0.6820                 | 0.6577                 | 0.5988                 | 0.4855                |
| 1.1168     | 20     | 32.4667       | -                      | -                      | -                      | -                      | -                     |
| 1.7007     | 30     | 27.9435       | -                      | -                      | -                      | -                      | -                     |
| 2.0        | 36     | -             | 0.7167                 | 0.7002                 | 0.6764                 | 0.6233                 | 0.5187                |
| 2.2336     | 40     | 22.2924       | -                      | -                      | -                      | -                      | -                     |
| 2.8175     | 50     | 20.5125       | -                      | -                      | -                      | -                      | -                     |
| 3.0        | 54     | -             | 0.7190                 | 0.7080                 | 0.6824                 | 0.6318                 | 0.5339                |
| 3.3504     | 60     | 18.3621       | -                      | -                      | -                      | -                      | -                     |
| **3.8175** | **68** | **-**         | **0.7212**             | **0.7103**             | **0.6839**             | **0.6319**             | **0.5334**            |

* The bold row denotes the saved checkpoint.

### Framework Versions

- Python: 3.10.12
- Sentence Transformers: 3.4.0
- Transformers: 4.48.1
- PyTorch: 2.5.1+cu124
- Accelerate: 1.3.0
- Datasets: 3.2.0
- Tokenizers: 0.21.0

## Citation

### BibTeX

#### Sentence Transformers

```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

#### MatryoshkaLoss

```bibtex
@misc{kusupati2024matryoshka,
    title = {Matryoshka Representation Learning},
    author = {Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year = {2024},
    eprint = {2205.13147},
    archivePrefix = {arXiv},
    primaryClass = {cs.LG}
}
```

#### MultipleNegativesRankingLoss

```bibtex
@misc{henderson2017efficient,
    title = {Efficient Natural Language Response Suggestion for Smart Reply},
    author = {Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year = {2017},
    eprint = {1705.00652},
    archivePrefix = {arXiv},
    primaryClass = {cs.CL}
}
```