Update README.md
README.md
CHANGED

@@ -54,427 +54,111 @@ model-index:
      name: F1 Weighted
---

- # CrossEncoder based on jhu-clsp/ettin-encoder-17m
-
- This is a [Cross Encoder](https://www.sbert.net/docs/cross_encoder/usage/usage.html) model finetuned from [jhu-clsp/ettin-encoder-17m](https://huggingface.co/jhu-clsp/ettin-encoder-17m) on the [all-nli-distill](https://huggingface.co/datasets/dleemiller/all-nli-distill) dataset using the [sentence-transformers](https://www.sbert.net) library.
-
- ## Model Details
-
- ### Model Description
- - **Model Type:** Cross Encoder
- - **Base model:** [jhu-clsp/ettin-encoder-17m](https://huggingface.co/jhu-clsp/ettin-encoder-17m) <!-- at revision 987607455c61e7a5bbc85f7758e0512ea6d0ae4c -->
- - **Maximum Sequence Length:** 7999 tokens
- - **Number of Output Labels:** 3 labels
- - **Training Dataset:**
-     - [all-nli-distill](https://huggingface.co/datasets/dleemiller/all-nli-distill)
- - **Language:** en
- <!-- - **License:** Unknown -->
-
- ### Model Sources
- - **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- - **Documentation:** [Cross Encoder Documentation](https://www.sbert.net/docs/cross_encoder/usage/usage.html)
- - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- - **Hugging Face:** [Cross Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=cross-encoder)
-
- ## Usage
-
- ### Direct Usage (Sentence Transformers)
-
- First install the Sentence Transformers library:
-
- ```bash
- pip install -U sentence-transformers
- ```
-
- Then you can load this model and run inference.
-
- ```python
- from sentence_transformers import CrossEncoder
-
- # Download from the 🤗 Hub
- model = CrossEncoder("cross_encoder_model_id")
- # Get scores for pairs of texts
- pairs = [
-     ['Two women are embracing while holding to go packages.', 'The sisters are hugging goodbye while holding to go packages after just eating lunch.'],
-     ['Two women are embracing while holding to go packages.', 'Two woman are holding packages.'],
-     ['Two women are embracing while holding to go packages.', 'The men are fighting outside a deli.'],
-     ['Two young children in blue jerseys, one with the number 9 and one with the number 2 are standing on wooden steps in a bathroom and washing their hands in a sink.', 'Two kids in numbered jerseys wash their hands.'],
-     ['Two young children in blue jerseys, one with the number 9 and one with the number 2 are standing on wooden steps in a bathroom and washing their hands in a sink.', 'Two kids at a ballgame wash their hands.'],
- ]
- scores = model.predict(pairs)
- print(scores.shape)
- # (5, 3)
- ```
-
- <!--
- ### Direct Usage (Transformers)
- -->
-
- <!--
- ### Out-of-Scope Use
- -->
-
- ## Evaluation
-
- ### Metrics
-
- #### Cross Encoder Classification
-
- | Metric       | AllNLI-dev | AllNLI-test |
- |:-------------|:-----------|:------------|
- | **f1_macro** | **0.8432** | **0.8443**  |
- | f1_micro     | 0.8435     | 0.8447      |
- | f1_weighted  | 0.8439     | 0.845       |
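
For reference, the three F1 variants in the table above differ only in how per-class scores are averaged. A minimal illustration with scikit-learn (toy labels over the three NLI classes, not the actual evaluation data):

```python
from sklearn.metrics import f1_score

# Toy predictions over the three NLI classes (0, 1, 2) -- illustrative only.
y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 1, 1, 1, 2, 0]

for average in ("macro", "micro", "weighted"):
    # macro: unweighted mean of per-class F1; micro: F1 from global counts;
    # weighted: per-class F1 weighted by class support.
    print(average, round(f1_score(y_true, y_pred, average=average), 4))
```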
-
- ## Bias, Risks and Limitations
-
- ### Recommendations
-
- ## Training Details
-
- ### Training Dataset
-
- #### all-nli-distill
-
- * Dataset: [all-nli-distill](https://huggingface.co/datasets/dleemiller/all-nli-distill) at [6907d07](https://huggingface.co/datasets/dleemiller/all-nli-distill/tree/6907d071937601df154a4641e824cbce44e8fd41)
- * Size: 942,069 training samples
- * Columns: <code>premise</code>, <code>hypothesis</code>, <code>label</code>, and <code>hash</code>
- * Approximate statistics based on the first 1000 samples:
-   |         | premise | hypothesis | label | hash |
-   |:--------|:--------|:-----------|:------|:-----|
-   | type    | string  | string     | int   | string |
-   | details | <ul><li>min: 7 characters</li><li>mean: 87.47 characters</li><li>max: 485 characters</li></ul> | <ul><li>min: 3 characters</li><li>mean: 45.98 characters</li><li>max: 157 characters</li></ul> | <ul><li>0: ~32.70%</li><li>1: ~34.20%</li><li>2: ~33.10%</li></ul> | <ul><li>min: 32 characters</li><li>mean: 32.0 characters</li><li>max: 32 characters</li></ul> |
- * Samples:
-   | premise | hypothesis | label | hash |
-   |:--------|:-----------|:------|:-----|
-   | <code>somehow, somewhere.</code> | <code>Someplace, in some way.</code> | <code>1</code> | <code>9a14d41bdf965ed999446ea11dbf5b67</code> |
-   | <code>A boy is sitting on a boat with two flags.</code> | <code>A blonde person sitting.</code> | <code>2</code> | <code>758664a444dd4c02d89220da2ab499ac</code> |
-   | <code>A asian male suit clad, uses a umbrella to shield himself from the rain.</code> | <code>He is late for a meeting.</code> | <code>2</code> | <code>7e1155728f9cf33655076ec6b36cdb10</code> |
- * Loss: <code>__main__.PrecomputedDistillationLoss</code>
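
To peek at the dataset layout described above, something like the following should work (a sketch; it assumes the standard `datasets` API and a `train` split on the Hub):

```python
from datasets import load_dataset

# Stream a few rows rather than downloading all ~942k training samples.
ds = load_dataset("dleemiller/all-nli-distill", split="train", streaming=True)
for row in ds.take(3):
    # Expected columns: premise, hypothesis, label (0/1/2), and hash,
    # alongside the precomputed teacher logits used for distillation.
    print(row["premise"], "|", row["hypothesis"], "|", row["label"])
```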
- ### Evaluation Dataset
-
- #### all-nli-distill
-
- * Dataset: [all-nli-distill](https://huggingface.co/datasets/dleemiller/all-nli-distill) at [6907d07](https://huggingface.co/datasets/dleemiller/all-nli-distill/tree/6907d071937601df154a4641e824cbce44e8fd41)
- * Size: 19,657 evaluation samples
- * Columns: <code>premise</code>, <code>hypothesis</code>, <code>label</code>, and <code>hash</code>
- * Approximate statistics based on the first 1000 samples:
-   |         | premise | hypothesis | label | hash |
-   |:--------|:--------|:-----------|:------|:-----|
-   | type    | string  | string     | int   | string |
-   | details | <ul><li>min: 16 characters</li><li>mean: 75.01 characters</li><li>max: 229 characters</li></ul> | <ul><li>min: 11 characters</li><li>mean: 37.66 characters</li><li>max: 116 characters</li></ul> | <ul><li>0: ~33.60%</li><li>1: ~33.10%</li><li>2: ~33.30%</li></ul> | <ul><li>min: 32 characters</li><li>mean: 32.0 characters</li><li>max: 32 characters</li></ul> |
- * Samples:
-   | premise | hypothesis | label | hash |
-   |:--------|:-----------|:------|:-----|
-   | <code>Two women are embracing while holding to go packages.</code> | <code>The sisters are hugging goodbye while holding to go packages after just eating lunch.</code> | <code>2</code> | <code>ee3806dad2b757a8e131aa50f2b73ec9</code> |
-   | <code>Two women are embracing while holding to go packages.</code> | <code>Two woman are holding packages.</code> | <code>1</code> | <code>563afee877ed42f33dafe7c76fe9604b</code> |
-   | <code>Two women are embracing while holding to go packages.</code> | <code>The men are fighting outside a deli.</code> | <code>0</code> | <code>fd7c1382a8321094d60105ff37c038da</code> |
- * Loss: <code>__main__.PrecomputedDistillationLoss</code>
-
- ### Training Hyperparameters
-
- #### Non-Default Hyperparameters
-
- - `eval_strategy`: steps
- - `per_device_train_batch_size`: 512
- - `per_device_eval_batch_size`: 512
- - `learning_rate`: 0.0002
- - `num_train_epochs`: 5
- - `warmup_ratio`: 0.1
- - `bf16`: True
- - `load_best_model_at_end`: True
-
- #### All Hyperparameters
- <details><summary>Click to expand</summary>
-
- - `overwrite_output_dir`: False
- - `do_predict`: False
- - `eval_strategy`: steps
- - `prediction_loss_only`: True
- - `per_device_train_batch_size`: 512
- - `per_device_eval_batch_size`: 512
- - `per_gpu_train_batch_size`: None
- - `per_gpu_eval_batch_size`: None
- - `gradient_accumulation_steps`: 1
- - `eval_accumulation_steps`: None
- - `torch_empty_cache_steps`: None
- - `learning_rate`: 0.0002
- - `weight_decay`: 0.0
- - `adam_beta1`: 0.9
- - `adam_beta2`: 0.999
- - `adam_epsilon`: 1e-08
- - `max_grad_norm`: 1.0
- - `num_train_epochs`: 5
- - `max_steps`: -1
- - `lr_scheduler_type`: linear
- - `lr_scheduler_kwargs`: {}
- - `warmup_ratio`: 0.1
- - `warmup_steps`: 0
- - `log_level`: passive
- - `log_level_replica`: warning
- - `log_on_each_node`: True
- - `logging_nan_inf_filter`: True
- - `save_safetensors`: True
- - `save_on_each_node`: False
- - `save_only_model`: False
- - `restore_callback_states_from_checkpoint`: False
- - `no_cuda`: False
- - `use_cpu`: False
- - `use_mps_device`: False
- - `seed`: 42
- - `data_seed`: None
- - `jit_mode_eval`: False
- - `use_ipex`: False
- - `bf16`: True
- - `fp16`: False
- - `fp16_opt_level`: O1
- - `half_precision_backend`: auto
- - `bf16_full_eval`: False
- - `fp16_full_eval`: False
- - `tf32`: None
- - `local_rank`: 0
- - `ddp_backend`: None
- - `tpu_num_cores`: None
- - `tpu_metrics_debug`: False
- - `debug`: []
- - `dataloader_drop_last`: False
- - `dataloader_num_workers`: 0
- - `dataloader_prefetch_factor`: None
- - `past_index`: -1
- - `disable_tqdm`: False
- - `remove_unused_columns`: True
- - `label_names`: None
- - `load_best_model_at_end`: True
- - `ignore_data_skip`: False
- - `fsdp`: []
- - `fsdp_min_num_params`: 0
- - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- - `fsdp_transformer_layer_cls_to_wrap`: None
- - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- - `parallelism_config`: None
- - `deepspeed`: None
- - `label_smoothing_factor`: 0.0
- - `optim`: adamw_torch_fused
- - `optim_args`: None
- - `adafactor`: False
- - `group_by_length`: False
- - `length_column_name`: length
- - `ddp_find_unused_parameters`: None
- - `ddp_bucket_cap_mb`: None
- - `ddp_broadcast_buffers`: False
- - `dataloader_pin_memory`: True
- - `dataloader_persistent_workers`: False
- - `skip_memory_metrics`: True
- - `use_legacy_prediction_loop`: False
- - `push_to_hub`: False
- - `resume_from_checkpoint`: None
- - `hub_model_id`: None
- - `hub_strategy`: every_save
- - `hub_private_repo`: None
- - `hub_always_push`: False
- - `hub_revision`: None
- - `gradient_checkpointing`: False
- - `gradient_checkpointing_kwargs`: None
- - `include_inputs_for_metrics`: False
- - `include_for_metrics`: []
- - `eval_do_concat_batches`: True
- - `fp16_backend`: auto
- - `push_to_hub_model_id`: None
- - `push_to_hub_organization`: None
- - `mp_parameters`:
- - `auto_find_batch_size`: False
- - `full_determinism`: False
- - `torchdynamo`: None
- - `ray_scope`: last
- - `ddp_timeout`: 1800
- - `torch_compile`: False
- - `torch_compile_backend`: None
- - `torch_compile_mode`: None
- - `include_tokens_per_second`: False
- - `include_num_input_tokens_seen`: False
- - `neftune_noise_alpha`: None
- - `optim_target_modules`: None
- - `batch_eval_metrics`: False
- - `eval_on_start`: False
- - `use_liger_kernel`: False
- - `liger_kernel_config`: None
- - `eval_use_gather_object`: False
- - `average_tokens_across_devices`: False
- - `prompts`: None
- - `batch_sampler`: batch_sampler
- - `multi_dataset_batch_sampler`: proportional
- - `router_mapping`: {}
- - `learning_rate_mapping`: {}
-
- </details>
-
- ### Training Logs
- | Epoch | Step | Training Loss | Validation Loss | AllNLI-dev_f1_macro | AllNLI-test_f1_macro |
- |:----------:|:--------:|:-------------:|:---------------:|:-------------------:|:--------------------:|
- | -1 | -1 | - | - | 0.2911 | - |
- | 0.0543 | 100 | 6.5112 | - | - | - |
- | 0.1087 | 200 | 3.7062 | - | - | - |
- | 0.1630 | 300 | 2.8158 | - | - | - |
- | 0.2174 | 400 | 2.4929 | - | - | - |
- | 0.2717 | 500 | 2.3007 | 2.2750 | 0.7475 | - |
- | 0.3261 | 600 | 2.1216 | - | - | - |
- | 0.3804 | 700 | 1.9902 | - | - | - |
- | 0.4348 | 800 | 1.943 | - | - | - |
- | 0.4891 | 900 | 1.8469 | - | - | - |
- | 0.5435 | 1000 | 1.7757 | 1.8039 | 0.7890 | - |
- | 0.5978 | 1100 | 1.7368 | - | - | - |
- | 0.6522 | 1200 | 1.6685 | - | - | - |
- | 0.7065 | 1300 | 1.598 | - | - | - |
- | 0.7609 | 1400 | 1.5582 | - | - | - |
- | 0.8152 | 1500 | 1.5229 | 1.5512 | 0.8052 | - |
- | 0.8696 | 1600 | 1.4953 | - | - | - |
- | 0.9239 | 1700 | 1.4457 | - | - | - |
- | 0.9783 | 1800 | 1.4274 | - | - | - |
- | 1.0326 | 1900 | 1.2831 | - | - | - |
- | 1.0870 | 2000 | 1.1841 | 1.4433 | 0.8147 | - |
- | 1.1413 | 2100 | 1.1605 | - | - | - |
- | 1.1957 | 2200 | 1.1525 | - | - | - |
- | 1.25 | 2300 | 1.1417 | - | - | - |
- | 1.3043 | 2400 | 1.1635 | - | - | - |
- | 1.3587 | 2500 | 1.1386 | 1.3484 | 0.8222 | - |
- | 1.4130 | 2600 | 1.1369 | - | - | - |
- | 1.4674 | 2700 | 1.1333 | - | - | - |
- | 1.5217 | 2800 | 1.1142 | - | - | - |
- | 1.5761 | 2900 | 1.0981 | - | - | - |
- | 1.6304 | 3000 | 1.1037 | 1.3646 | 0.8204 | - |
- | 1.6848 | 3100 | 1.0831 | - | - | - |
- | 1.7391 | 3200 | 1.0799 | - | - | - |
- | 1.7935 | 3300 | 1.063 | - | - | - |
- | 1.8478 | 3400 | 1.0715 | - | - | - |
- | 1.9022 | 3500 | 1.0707 | 1.2478 | 0.8323 | - |
- | 1.9565 | 3600 | 1.047 | - | - | - |
- | 2.0109 | 3700 | 0.9925 | - | - | - |
- | 2.0652 | 3800 | 0.7622 | - | - | - |
- | 2.1196 | 3900 | 0.7608 | - | - | - |
- | 2.1739 | 4000 | 0.7627 | 1.2346 | 0.8346 | - |
- | 2.2283 | 4100 | 0.7728 | - | - | - |
- | 2.2826 | 4200 | 0.7674 | - | - | - |
- | 2.3370 | 4300 | 0.7716 | - | - | - |
- | 2.3913 | 4400 | 0.7728 | - | - | - |
- | 2.4457 | 4500 | 0.7814 | 1.2380 | 0.8360 | - |
- | 2.5 | 4600 | 0.7556 | - | - | - |
- | 2.5543 | 4700 | 0.7698 | - | - | - |
- | 2.6087 | 4800 | 0.7643 | - | - | - |
- | 2.6630 | 4900 | 0.765 | - | - | - |
- | 2.7174 | 5000 | 0.7661 | 1.2012 | 0.8363 | - |
- | 2.7717 | 5100 | 0.7605 | - | - | - |
- | 2.8261 | 5200 | 0.7546 | - | - | - |
- | 2.8804 | 5300 | 0.7572 | - | - | - |
- | 2.9348 | 5400 | 0.7568 | - | - | - |
- | 2.9891 | 5500 | 0.7422 | 1.1767 | 0.8396 | - |
- | 3.0435 | 5600 | 0.5901 | - | - | - |
- | 3.0978 | 5700 | 0.5473 | - | - | - |
- | 3.1522 | 5800 | 0.5463 | - | - | - |
- | 3.2065 | 5900 | 0.5453 | - | - | - |
- | 3.2609 | 6000 | 0.5484 | 1.1911 | 0.8419 | - |
- | 3.3152 | 6100 | 0.5506 | - | - | - |
- | 3.3696 | 6200 | 0.5444 | - | - | - |
- | 3.4239 | 6300 | 0.5496 | - | - | - |
- | 3.4783 | 6400 | 0.5489 | - | - | - |
- | 3.5326 | 6500 | 0.5497 | 1.1816 | 0.8400 | - |
- | 3.5870 | 6600 | 0.5476 | - | - | - |
- | 3.6413 | 6700 | 0.5478 | - | - | - |
- | 3.6957 | 6800 | 0.5444 | - | - | - |
- | 3.75 | 6900 | 0.5493 | - | - | - |
- | 3.8043 | 7000 | 0.5422 | 1.1711 | 0.8440 | - |
- | 3.8587 | 7100 | 0.5434 | - | - | - |
- | 3.9130 | 7200 | 0.5438 | - | - | - |
- | 3.9674 | 7300 | 0.5416 | - | - | - |
- | 4.0217 | 7400 | 0.491 | - | - | - |
- | 4.0761 | 7500 | 0.4108 | 1.1752 | 0.8423 | - |
- | 4.1304 | 7600 | 0.4143 | - | - | - |
- | 4.1848 | 7700 | 0.415 | - | - | - |
- | 4.2391 | 7800 | 0.4118 | - | - | - |
- | 4.2935 | 7900 | 0.4221 | - | - | - |
- | 4.3478 | 8000 | 0.4153 | 1.1767 | 0.8436 | - |
- | 4.4022 | 8100 | 0.4159 | - | - | - |
- | 4.4565 | 8200 | 0.411 | - | - | - |
- | 4.5109 | 8300 | 0.4216 | - | - | - |
- | 4.5652 | 8400 | 0.4163 | - | - | - |
- | 4.6196 | 8500 | 0.4118 | 1.1720 | 0.8429 | - |
- | 4.6739 | 8600 | 0.4198 | - | - | - |
- | 4.7283 | 8700 | 0.4154 | - | - | - |
- | 4.7826 | 8800 | 0.4057 | - | - | - |
- | 4.8370 | 8900 | 0.4098 | - | - | - |
- | **4.8913** | **9000** | **0.4064** | **1.1687** | **0.8432** | **-** |
- | 4.9457 | 9100 | 0.4056 | - | - | - |
- | 5.0 | 9200 | 0.4115 | - | - | - |
- | -1 | -1 | - | - | - | 0.8443 |
-
- * The bold row denotes the saved checkpoint.
-
- ### Framework Versions
- - Python: 3.12.2
- - Sentence Transformers: 5.1.0
- - Transformers: 4.57.0.dev0
- - PyTorch: 2.8.0+cu128
- - Accelerate: 1.10.1
- - Datasets: 4.0.0
- - Tokenizers: 0.22.0

## Citation

- #### Sentence Transformers

```bibtex
- @inproceedings{reimers-2019-sentence-bert,
-     title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
-     author = "Reimers, Nils and Gurevych, Iryna",
-     booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
-     month = "11",
-     year = "2019",
-     publisher = "Association for Computational Linguistics",
-     url = "https://arxiv.org/abs/1908.10084",
}
```

- <!--
- ## Glossary
-
- *Clearly define terms in order to be accessible across audiences.*
- -->
-
- <!--
- ## Model Card Authors
-
- *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
- -->
-
- <!--
- ## Model Card Contact
- -->

+ # EttinX Cross-Encoder: Natural Language Inference (NLI)
+
+ This cross encoder performs sequence classification over contradiction/neutral/entailment labels. It is drop-in compatible with comparable Sentence Transformers cross encoders.
+
+ To train this model, I added teacher logits from the `dleemiller/ModernCE-large-nli` model to the AllNLI dataset (`dleemiller/all-nli-distill`). Distilling from these precomputed teacher logits significantly improves performance over standard training.
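
The training details reference a `PrecomputedDistillationLoss`; its core idea can be sketched as a temperature-scaled KL term against the stored teacher logits, blended with ordinary cross-entropy. The function and weightings below are illustrative assumptions, not the exact training code:

```python
import torch.nn.functional as F

def precomputed_distillation_loss(student_logits, teacher_logits, labels,
                                  alpha=0.5, temperature=2.0):
    """Blend soft-target KL (teacher logits stored in the dataset) with hard-label CE."""
    kd = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature**2  # standard rescaling so the KD gradient matches the CE scale
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce
```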
+ This 17M-parameter architecture is based on ModernBERT and is an excellent candidate for lightweight **CPU inference**.
+
+ ---

+ ## Features
+ - **High performing:** Achieves **80.47%** and **86.95%** Micro F1 on the MNLI mismatched and SNLI test sets.
+ - **Efficient architecture:** Based on the Ettin-17m encoder design (17M parameters), offering fast inference.
+ - **Extended context length:** Processes sequences up to 8192 tokens, making it well suited for evaluating LLM outputs.

+ ---
+
+ ## Performance
+
+ | Model | MNLI Mismatched | SNLI Test | Context Length | # Parameters |
+ |---------------------------|-------------------|--------------|----------------|----------------|
+ | `dleemiller/ModernCE-large-nli` | **0.9202** | 0.9110 | 8192 | 395M |
+ | `dleemiller/ModernCE-base-nli` | 0.9034 | 0.9025 | 8192 | 149M |
+ | `cross-encoder/deberta-v3-large` | 0.9049 | 0.9220 | 512 | 435M |
+ | `cross-encoder/deberta-v3-base` | 0.9004 | 0.9234 | 512 | 184M |
+ | `cross-encoder/nli-distilroberta-base` | 0.8398 | 0.8838 | 512 | 82M |
+ | `dleemiller/EttinX-nli-xxs` | 0.8047 | 0.8695 | 8192 | 17M |
+
+ ---

+ ## Usage
+
+ To use EttinX for NLI tasks, you can load the model with the Hugging Face `sentence-transformers` library:
+
+ ```python
+ from sentence_transformers import CrossEncoder
+
+ # Load the EttinX model
+ model = CrossEncoder("dleemiller/EttinX-nli-xxs")
+
+ scores = model.predict([
+     ('A man is eating pizza', 'A man eats something'),
+     ('A black race car starts up in front of a crowd of people.', 'A man is driving down a lonely road.'),
+ ])
+
+ # Convert scores to labels
+ label_mapping = ['contradiction', 'entailment', 'neutral']
+ labels = [label_mapping[score_max] for score_max in scores.argmax(axis=1)]
+ # ['entailment', 'contradiction']
+ ```
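
The checkpoint can also be used without `sentence-transformers`, as a plain `transformers` sequence classification model (a sketch; it assumes the checkpoint's `id2label` config matches the label order above):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "dleemiller/EttinX-nli-xxs"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# A cross encoder scores the (premise, hypothesis) pair jointly in one forward pass.
inputs = tokenizer("A man is eating pizza", "A man eats something", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(dim=-1).item()])
```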

+ ---

+ ## Training Details
+
+ ### Pretraining
+ We initialize from the `jhu-clsp/ettin-encoder-17m` weights.
+
+ Details:
+ - Batch size: 512
+ - Learning rate: 1e-4
+ - Attention dropout: 0.1

+ ### Fine-Tuning
+ Fine-tuning was performed on the `dleemiller/all-nli-distill` dataset.
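
The non-default hyperparameters recorded in the training details above (batch size 512, learning rate 2e-4, 5 epochs, 10% warmup, bf16) map onto the trainer roughly as follows. This is a sketch assuming the `CrossEncoderTrainingArguments` API from recent sentence-transformers releases, with a placeholder `output_dir`:

```python
from sentence_transformers.cross_encoder import CrossEncoderTrainingArguments

args = CrossEncoderTrainingArguments(
    output_dir="ettinx-nli-xxs",  # placeholder
    eval_strategy="steps",
    per_device_train_batch_size=512,
    per_device_eval_batch_size=512,
    learning_rate=2e-4,
    num_train_epochs=5,
    warmup_ratio=0.1,
    bf16=True,
    load_best_model_at_end=True,
)
```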

+ ### Validation Results
+ The model achieved the following test-set micro F1 scores after fine-tuning:
+ - **MNLI Mismatched:** 0.8047
+ - **SNLI:** 0.8695

+ ---

+ ## Model Card
+
+ - **Architecture:** Ettin-encoder-17m
+ - **Fine-Tuning Data:** `dleemiller/all-nli-distill`
+
+ ---

+ ## Thank You
+
+ Thanks to the Johns Hopkins CLSP team for providing the Ettin encoder models, and to the Sentence Transformers team for their leadership in transformer encoder models.
+
+ ---

## Citation

+ If you use this model in your research, please cite:

```bibtex
+ @misc{ettinxnli2025,
+     author = {Miller, D. Lee},
+     title = {EttinX NLI: An NLI cross encoder model},
+     year = {2025},
+     publisher = {Hugging Face Hub},
+     url = {https://huggingface.co/dleemiller/EttinX-nli-xxs},
}
```

+ ---
+
+ ## License
+
+ This model is licensed under the [MIT License](LICENSE).
|