---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:46618
- loss:MultipleNegativesRankingLoss
base_model: sentence-transformers/all-MiniLM-L6-v2
widget:
- source_sentence: What does it mean for a packet to be authorized, as mentioned in
the document?
sentences:
- '<title>Creating a Secure Underlay for the Internet</title>
<section>4.5 Routing Logic at PoPs </section>
<content>
Through a number of key design principles and by leveraging the secure backbone
for internal routing, SBAS is able to disseminate routes securely to customers
and out to the Internet. Using a strict priority hierarchy on the control plane,
traffic to/from customers benefits from strong hijack resilience.
</content>'
- '<title>We recommend reading the following chapters to obtain a basic understanding
of SCION. Chapter What to Read Chapter 1 1 Introduction</title>
<section>25.3 Inter-domain Multipath Routing Protocols </section>
<content>
. Routing Deflection [558] allows endpoints to deflect their traffic at certain
BGP routers to choose different paths. While this approach can be incrementally
deployed with minimal changes to BGP, it only provides coarse-grained path control.
</content>'
- '<title>Formal Verification of Secure Forwarding Protocols</title>
<section>B. State</section>
<content>
A packet consists of the desired future path fut, and the (presumed) traversed
path past in the reverse direction. The full path is rev(past(m)) • fut(m). While
this splitting of the path simplifies our proofs, the forwarding path could equivalently
be defined as a single sequence with a moving pointer indicating the current position
on the path. We call a packet m authorized, if fut(m) ∈ auth a . Additionally,
each packet records a path hist, also in reverse direction. It represents the
packet''s actual trajectory and is used to express security properties. This can
be seen as a history variable.
</content>'
- source_sentence: _FLASHBACK
sentences:
- '<title>Anycast in the SCION Internet Architecture</title>
<section>1.1 Project Goal </section>
<content>
From a technical point of view, these designs for replicated services in SCION
do not necessarily need to work in the same way as anycast in the current internet.
It only needs to provide a conceptually similar solution, solving the same problem
as anycast does for the current internet. Users should be able to use a single
address or name to access a replicated internet service, and with that end up
connected to the best replica. The best replica does not always have to be the
one with the lowest latency or smallest geographical distance, it could also be
the replica that has the highest available bandwidth or lowest load, or a combination
of any of these.
</content>'
- '<title>Unknown Title</title>
<section>4.3 The API </section>
<content>
• PathProcessor. A path processor, as defined in the previous chapter. Has the
ability to send packets on specific paths over any of the connections associated
with it. Path processors are also receive extensions and hence can intercept incoming
packets. The difference between a path processor and a receive extension is that
the root path processor of a connection can be changed at any point in time during
the lifetime of a connection (hot swapping), while the receive extension is fixed
throughout the lifetime of a connection. By using a fixed receive extension to
handle and reply to latency probes, it becomes possible to change the path processor
without breaking the ability of the other peer to perform latency probing. As
such, the design foresees that each path processor only handles incoming packets
destined directly to it (e.g. latency probe replies), while the receive extension
has to handle any possible incoming packets from path processors of the other
peer (e.g. latency probes).
</content>'
- '<title>SCION Control Plane</title>
<url>https://www.ietf.org/archive/id/draft-dekater-scion-controlplane-07.html</url>
<section>5.Path Lookup - 5.2.Behavior of Actors in the Lookup Process</section>
<content>
Expand the source wildcard into separate requests for each reachable core AS in
the source ISD.¶
For each core segment request;¶
If possible, return matching core segments from cache;¶
Otherwise, request the core segments from the Control Services of each reachable
core AS at the source of the core segment, and then add the retrieved core segments
to the cache.¶
If possible, return matching core segments from cache;¶
Otherwise, request the core segments from the Control Services of each reachable
core AS at the source of the core segment, and then add the retrieved core segments
to the cache.¶
In the case of a down segment request:¶
Expand the source wildcard into separate requests for every core AS in the destination
ISD (destination ISD refers to the ISD to which the destination endpoint belongs).¶
For each segment request;¶
If possible, return matching down segments from cache;¶
</content>'
- source_sentence: What does the document claim about the relationship between end-host
path selection and the convergence axiom?
sentences:
- '<url>https://github.com/netsec-ethz/scion-apps/blob/master/webapp/development.md</url>
<content>
# Webapp Construction and Design
Webapp is a go application designed to operate a web server for purposes of visualizing
and testing the SCION infrastructure. Webapp occupies a strange place in the SCIONLab
ecosystem, in that, it draws from a wide variety of sources to provide testing
and visualization features so a list of [dependencies](dependencies.md) has been
developed for maintenance purposes. There isn''t one central source or API for
the information webapp uses to interrogate SCIONLab, thus webapp may do the following:
* Read from environment variables.
* Scan SCION''s logs.
* Scan SCION''s directory structure.
* Call third-party service APIs.
* Request static configuration from a SCIONLab-maintained location.
* Execute bash scripts.
* Execute SCION or SCIONLab tools and apps.
* Read from SCION''s databases.
* Make connections to SCION services, like the SCION Daemon.
</content>'
- '<title> - Ceremony administrator role - Phase 2 - Creation of TRC Payload</title>
<url>https://docs.scion.org/en/latest/cryptography/trc-signing-ceremony-phases-sensitive.html</url>
<content>
Connect the *USB flash drive* to your device, and copy the TRC payload file to
the root directory, then disconnect the *USB flash drive*. Hand out the *USB flash
drive*
to the *voting representatives*.
The *voting representatives* proceed to check the contents of the TRC payload
file by computing the SHA256 sum. Over the duration of the checks, keep the
SHA256 sum of the file available on the monitor for inspection.
This phase concludes once every *voting representative* confirms that the
contents of the TRC payload are correct. Once that happens, announce that
**Phase 2** has successfully concluded.
</content>'
- '<title>An Axiomatic Perspective on the Performance Effects of End-Host Path Selection</title>
<section>6.1.4 Convergence (Axiom 3 </section>
<content>
. Similar to Insight 8, the reason for this improvement is the de-synchronization
of the continuity time brought about by agent migration, which reduces the variance
of the aggregate additive increase and thus the flow-volume fluctuations. Contrary
to the widespread belief that end-host path selection necessarily hurts stability
(in the sense of the convergence axiom), our analysis thus shows that network
stability can in fact benefit from end-host path selection. 6.1.5 Fairness (Axiom
4). Given simultaneous sending start and no path selection, perfect synchronization
implies that all agents always have exactly the same congestion-window size, i.e.,
𝜂 = 0. Moreover, Zarchy et generally tend to come close to perfect fairness [41]
. To find the worst-case effects of end-host path selection, we thus assume perfect
fairness in the scenario without path selection:
</content>'
- source_sentence: How is the value of Acci+1 computed according to the document?
sentences:
- '<title>SCION Data Plane</title>
<url>https://www.ietf.org/archive/id/draft-dekater-scion-dataplane-04.html</url>
<section>4.Path Authorization - 4.2.Path Initialization and Packet Processing</section>
<content>
If the just calculated MACVerifyi does not match the MACi in the Hop Field of
the current ASi, drop the packet.¶
Compute the value of Acci+1. For this, use the formula in Section 4.1.1.2. Replace
Acci in the formula with the current value of Acc as set in the Acc field of the
current Info Field.¶
Replace the value of the Acc field in the current Info Field with the just calculated
value of Acci+1.¶
Case 2 The packet traverses the path segment in construction direction (C = "1")
where the path segment includes a peering Hop Field (P = "1") and the current
Hop Field is the peering Hop Field (i.e. the current hop is either the last hop
of the first segment or the first hop of the second segment). In this case, the
egress border router MUST take the following steps:¶
</content>'
- '<title>Debuglet: Programmable and Verifiable Inter-domain Network Telemetry</title>
<section>C. Control Plane</section>
<content>
. The function checks by looking up the ExecutionSlotsMap, when the first available
time slot that both to-be-involved executors can accommodate the measurement would
be, and how many execution slots need to be purchased at each executor. The function
returns the price that needs to be paid and the first possible time slot to the
initiator.
</content>'
- '<title>We recommend reading the following chapters to obtain a basic understanding
of SCION. Chapter What to Read Chapter 1 1 Introduction</title>
<section>17.5 Post-Quantum Cryptography </section>
<content>
. In this example, user U 1 trusts CA 1 more than CA 2 for issuing certificates
for domain D because CA 1 supports multi-perspective domain validation [1] ,
while user U 2 trusts CA 2 more than CA 1 because CA 2 is an American CA and D''s
toplevel domain is .us. In this example, U 1 should be able to express higher
trust 18.1 Trust Model in CA 1 than in CA 2 , while retaining the ability to use
certificates issued by CA 2 .
</content>'
- source_sentence: How many active ASes are reported as of the CIDR report mentioned
in the document?
sentences:
- '<title>The Case for In-Network Replay Suppression</title>
<section>4.3 Optimization Problem </section>
<content>
Equation 3 describes the size m of each BF as a function of the BF rotation interval
L, the number N of BFs, the number k of necessary hash functions, and the BF''s
target false-positive rate (fp). Since an incoming packet is checked against all
BFs, the overall target false-positive rate is 1 -(1fp) N . To determine the value
for fp, we consider the average number of packets that a router receives in an
interval L (which is r •L, where r is the incoming packet rate). Using the BF
equations, we get fp = (1e k•x•L/m ) k and by combining it with the equation for
the size of a BF, we obtain Equation 3. The inequality indicates that any larger
value for m yields a lower false-positive than fp.
</content>'
- '<title>Pervasive Internet-Wide Low-Latency Authentication</title>
<section>C. AS as Opportunistically Trusted Entity</section>
<content>
Each entity in the Internet is part of at least one AS, which is under the control
of a single administrative entity. This facilitates providing a common service
that authenticates endpoints (e.g., using a challenge-response protocol or preinstalled
keys and certificates) and issues certificates. Another advantage is the typically
close relationship between an endpoint and its AS, which allows for a stronger
leverage in case of misbehavior. Since it is infeasible for an endpoint to authenticate
each AS by itself (there are ∼71 000 active ASes according to the CIDR report [4]
), RPKI is used as a trust anchor to authenticate ASes. RPKI resource issuers
assign an AS a set of IP address prefixes that this AS is allowed to originate.
An AS then issues short-lived certificates for its authorized IP address ranges.
</content>'
- '<title>Unknown Title</title>
<section>. Paths emission per unit of traffic</section>
<content>
The reason is that the number of BGP paths is less than  for most AS pairs. This
figure also suggests that the -greenest paths average emission differs from the
greenest path emission and the n-greenest paths average emission for both beaconing
algorithms. However, for every percentile, this difference in SCI-GIB is about
 times less than the one in SCI-BCE. This means that the -greenest paths average
emission in SCI-GIB is much closer to the greenest path emission than SCI-BCE.
Also, for every percentile, the difference between the -greenest paths average
emissions of the two different beaconing algorithms is  times more than the difference
between their greenest path emissions. From both of these observations, we conclude
that SCI-GIB is better at finding the greenest set of paths
</content>'
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: val ir eval
type: val-ir-eval
metrics:
- type: cosine_accuracy@1
value: 0.6293793793793794
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.8215715715715716
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8763763763763763
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9309309309309309
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.6293793793793794
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.2739406072739406
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.17547547547547548
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09334334334334335
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.6291916916916916
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.8209737515293072
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8758689244800356
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9305555555555556
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.7827567470448342
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7351305670750117
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7379411341051004
name: Cosine Map@100
---
# SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) <!-- at revision fa97f6e7cb1a59073dff9e6b13e2715cf7475ac9 -->
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
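The pipeline above is a BERT encoder followed by mean pooling and L2 normalization. The two post-processing modules can be sketched in plain PyTorch; the dummy tensors below are illustrative, with shapes matching this model's 384-dimensional output:

```python
import torch

def mean_pool_and_normalize(token_embeddings: torch.Tensor,
                            attention_mask: torch.Tensor) -> torch.Tensor:
    """Mean pooling over non-padding tokens, then L2 normalization,
    mirroring the Pooling and Normalize modules above."""
    mask = attention_mask.unsqueeze(-1).float()    # [batch, seq, 1]
    summed = (token_embeddings * mask).sum(dim=1)  # [batch, dim]
    counts = mask.sum(dim=1).clamp(min=1e-9)       # [batch, 1], avoid div-by-zero
    pooled = summed / counts
    return torch.nn.functional.normalize(pooled, p=2, dim=1)

# Dummy token embeddings: batch of 2, sequence length 5, hidden size 384
emb = torch.randn(2, 5, 384)
mask = torch.tensor([[1, 1, 1, 0, 0], [1, 1, 1, 1, 1]])
vecs = mean_pool_and_normalize(emb, mask)
print(vecs.shape)        # torch.Size([2, 384])
print(vecs.norm(dim=1))  # both norms are ~1.0 after normalization
```

Because the output vectors are unit-length, cosine similarity reduces to a dot product between embeddings.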
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("tjohn327/scion-minilm-l6-v3")
# Run inference
sentences = [
'How many active ASes are reported as of the CIDR report mentioned in the document?',
'<title>Pervasive Internet-Wide Low-Latency Authentication</title>\n<section>C. AS as Opportunistically Trusted Entity</section>\n<content>\nEach entity in the Internet is part of at least one AS, which is under the control of a single administrative entity. This facilitates providing a common service that authenticates endpoints (e.g., using a challenge-response protocol or preinstalled keys and certificates) and issues certificates. Another advantage is the typically close relationship between an endpoint and its AS, which allows for a stronger leverage in case of misbehavior. Since it is infeasible for an endpoint to authenticate each AS by itself (there are ∼71 000 active ASes according to the CIDR report [4] ), RPKI is used as a trust anchor to authenticate ASes. RPKI resource issuers assign an AS a set of IP address prefixes that this AS is allowed to originate. An AS then issues short-lived certificates for its authorized IP address ranges.\n</content>',
'<title>Unknown Title</title>\n<section>\uf735.\uf731 Paths emission per unit of traffic</section>\n<content>\nThe reason is that the number of BGP paths is less than \uf735 for most AS pairs. This figure also suggests that the \uf735-greenest paths average emission differs from the greenest path emission and the n-greenest paths average emission for both beaconing algorithms. However, for every percentile, this difference in SCI-GIB is about \uf733 times less than the one in SCI-BCE. This means that the \uf735-greenest paths average emission in SCI-GIB is much closer to the greenest path emission than SCI-BCE. Also, for every percentile, the difference between the \uf735-greenest paths average emissions of the two different beaconing algorithms is \uf732 times more than the difference between their greenest path emissions. From both of these observations, we conclude that SCI-GIB is better at finding the greenest set of paths\n</content>',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Dataset: `val-ir-eval`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.6294 |
| cosine_accuracy@3 | 0.8216 |
| cosine_accuracy@5 | 0.8764 |
| cosine_accuracy@10 | 0.9309 |
| cosine_precision@1 | 0.6294 |
| cosine_precision@3 | 0.2739 |
| cosine_precision@5 | 0.1755 |
| cosine_precision@10 | 0.0933 |
| cosine_recall@1 | 0.6292 |
| cosine_recall@3 | 0.821 |
| cosine_recall@5 | 0.8759 |
| cosine_recall@10 | 0.9306 |
| **cosine_ndcg@10** | **0.7828** |
| cosine_mrr@10 | 0.7351 |
| cosine_map@100 | 0.7379 |
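For reference, the ranking metrics in the table are computed from the ranked list of retrieved documents per query. A minimal sketch of two of them (the function names and document IDs are illustrative, not the evaluator's API):

```python
def accuracy_at_k(ranked_ids: list, relevant_id: str, k: int) -> float:
    """1.0 if the relevant document appears in the top-k results, else 0.0."""
    return 1.0 if relevant_id in ranked_ids[:k] else 0.0

def mrr_at_k(ranked_ids: list, relevant_id: str, k: int) -> float:
    """Reciprocal rank of the relevant document within the top-k, else 0.0."""
    for rank, doc_id in enumerate(ranked_ids[:k], start=1):
        if doc_id == relevant_id:
            return 1.0 / rank
    return 0.0

# One query whose single relevant document is ranked third:
ranking = ["d7", "d2", "d5", "d1"]
print(accuracy_at_k(ranking, "d5", 1))   # 0.0
print(accuracy_at_k(ranking, "d5", 3))   # 1.0
print(mrr_at_k(ranking, "d5", 10))       # 0.3333...
```

The reported values are these per-query scores averaged over the whole validation set.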
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 46,618 training samples
* Columns: <code>sentence_0</code> and <code>sentence_1</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 |
|:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 2 tokens</li><li>mean: 21.15 tokens</li><li>max: 45 tokens</li></ul> | <ul><li>min: 86 tokens</li><li>mean: 200.21 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
| sentence_0 | sentence_1 |
|:-----------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>What specific snippet of the resolver-recv-answer-for-client rule is presented in the document?</code> | <code><title>A Formal Framework for End-to-End DNS Resolution</title><br><section>3.2.3 DNS Dynamics. </section><br><content><br>This rule [resolver-recv-answer-for-client] has 74 LOC with nontrivial auxiliary functions and rule conditions. For the simplicity of our presentation, we only show the most important snippet with respect to positive caching. 5 The rule applies for a response that authoritatively answers a client query. More specifically, a temporary cache is created from the data contained in the response (line 8), which is then used for the lookup (line 10). Note that we cannot perform the lookup directly on the actual cache as case A of the resolver algorithm should only consider the data in the response, not in the cache. Also note that we look only at the data in the answer section (ANS, line 2) for the temporary positive cache as the entire rule is concerned with authoritative answers. Finally, we insert the data from the response into the actual cache and use this updated cache on th...</code> |
| <code>What is the relationship between early adopters and the potential security improvements mentioned for SBAS in the document?</code> | <code><title>Creating a Secure Underlay for the Internet</title><br><section>9 Related Work </section><br><content><br>. While several challenges still exist when deploying SBAS in a production setting, our survey shows a potential path forward and our experimental results show promise that sizable security improvements can be achieved with even a small set of early adopters. We hope that SBAS revitalizes the quest for secure inter-domain routing.<br></content></code> |
| <code>How does the evaluation in this study focus on user-driven path control within SCION?</code> | <code><title>Evaluation of SCION for User-driven Path Control: a Usability Study</title><br><section>ABSTRACT</section><br><content><br>The UPIN (User-driven Path verification and control in Inter-domain Networks) project aims to implement a way for users of a network to control how their data is traversing it. In this paper we investigate the possibilities and limitations of SCION for user-driven path control. Exploring several aspects of the performance of a SCION network allows us to define the most efficient path to assign to a user, following specific requests. We extensively analyze multiple paths, specifically focusing on latency, bandwidth and data loss, in SCIONLab, an experimental testbed and implementation of a SCION network. We gather data on these paths and store it in a database, that we then query to select the best path to give to a user to reach a destination, following their request on performance or devices to exclude for geographical or sovereignty reasons. Results indicate our so...</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
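MultipleNegativesRankingLoss treats, for each query in a batch, its paired passage as the positive and every other in-batch passage as a negative: cosine similarities are scaled (by 20.0 here) and fed to a cross-entropy loss over the batch. A hedged PyTorch sketch on random embeddings (the scale and dimensions match the config above; everything else is illustrative):

```python
import torch
import torch.nn.functional as F

def multiple_negatives_ranking_loss(query_emb: torch.Tensor,
                                    passage_emb: torch.Tensor,
                                    scale: float = 20.0) -> torch.Tensor:
    """In-batch negatives: row i of the similarity matrix
    should peak at column i (the paired positive)."""
    q = F.normalize(query_emb, dim=1)
    p = F.normalize(passage_emb, dim=1)
    scores = scale * (q @ p.T)             # [batch, batch] scaled cosine sims
    labels = torch.arange(scores.size(0))  # positives lie on the diagonal
    return F.cross_entropy(scores, labels)

q = torch.randn(64, 384)  # batch of 64 query embeddings
p = torch.randn(64, 384)  # their paired passage embeddings
loss = multiple_negatives_ranking_loss(q, p)
print(loss.item())  # scalar loss
```

This is why larger batch sizes (64 here) help: each query sees 63 negatives per step at no extra labeling cost.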
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `fp16`: True
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Training Logs
| Epoch | Step | Training Loss | val-ir-eval_cosine_ndcg@10 |
|:------:|:----:|:-------------:|:--------------------------:|
| 0.1372 | 100 | - | 0.6950 |
| 0.2743 | 200 | - | 0.7313 |
| 0.4115 | 300 | - | 0.7443 |
| 0.5487 | 400 | - | 0.7573 |
| 0.6859 | 500 | 0.3862 | 0.7576 |
| 0.8230 | 600 | - | 0.7627 |
| 0.9602 | 700 | - | 0.7662 |
| 1.0 | 729 | - | 0.7709 |
| 1.0974 | 800 | - | 0.7705 |
| 1.2346 | 900 | - | 0.7718 |
| 1.3717 | 1000 | 0.2356 | 0.7747 |
| 1.5089 | 1100 | - | 0.7742 |
| 1.6461 | 1200 | - | 0.7759 |
| 1.7833 | 1300 | - | 0.7776 |
| 1.9204 | 1400 | - | 0.7807 |
| 2.0 | 1458 | - | 0.7815 |
| 2.0576 | 1500 | 0.1937 | 0.7789 |
| 2.1948 | 1600 | - | 0.7814 |
| 2.3320 | 1700 | - | 0.7819 |
| 2.4691 | 1800 | - | 0.7823 |
| 2.6063 | 1900 | - | 0.7827 |
| 2.7435 | 2000 | 0.1758 | 0.7828 |
### Framework Versions
- Python: 3.12.3
- Sentence Transformers: 3.4.1
- Transformers: 4.49.0
- PyTorch: 2.6.0+cu124
- Accelerate: 1.4.0
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->