SentenceTransformer

This is a sentence-transformers model. It maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
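
The pooling layer above uses the CLS token rather than mean pooling. As a rough sketch of what that means in practice (an assumption based only on the printed architecture, which shows a plain BertModel and no Normalize module), the same embedding can be reproduced with the transformers library by taking the first token of the last hidden state:

from transformers import AutoModel, AutoTokenizer
import torch

model_id = "Detomo/cl-nagoya-sup-simcse-ja-nss-v_1_0_5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
bert = AutoModel.from_pretrained(model_id)

batch = tokenizer(
    ["科目:タイル。名称:床磁器質タイル。"],
    padding=True, truncation=True, max_length=512, return_tensors="pt",
)
with torch.no_grad():
    hidden = bert(**batch).last_hidden_state  # (batch, seq_len, 768)
cls_embeddings = hidden[:, 0]                 # CLS pooling -> (batch, 768)
print(cls_embeddings.shape)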

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("Detomo/cl-nagoya-sup-simcse-ja-nss-v_1_0_5")
# Run inference
sentences = [
    '科目:タイル。名称:床磁器質タイル。',
    '科目:ユニット及びその他。名称:#F薬渡し窓口カウンター。',
    '科目:ユニット及びその他。名称:F-#c教員棚。',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
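
The same embeddings support the semantic-search use case mentioned above. A minimal sketch, where the query string is a hypothetical example rather than a sentence from the training data:

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Detomo/cl-nagoya-sup-simcse-ja-nss-v_1_0_5")

corpus = [
    '科目:タイル。名称:床磁器質タイル。',
    '科目:ユニット及びその他。名称:#F薬渡し窓口カウンター。',
    '科目:ユニット及びその他。名称:F-#c教員棚。',
]
query = '科目:タイル。名称:壁タイル。'  # hypothetical query, not from the dataset

corpus_embeddings = model.encode(corpus)
query_embedding = model.encode([query])

# Cosine similarity between the query and every corpus entry, shape (1, 3)
scores = model.similarity(query_embedding, corpus_embeddings)
best = scores.argmax().item()
print(corpus[best], scores[0, best].item())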

Training Details

Training Dataset

Unnamed Dataset

  • Size: 12,611 training samples
  • Columns: sentence and label
  • Approximate statistics based on the first 1000 samples:
    • sentence: string; min: 11 tokens, mean: 18.16 tokens, max: 54 tokens
    • label: int; class distribution:
    • 0: ~0.30%
    • 1: ~0.30%
    • 2: ~0.30%
    • 3: ~0.30%
    • 4: ~0.30%
    • 5: ~0.30%
    • 6: ~0.30%
    • 7: ~0.30%
    • 8: ~0.30%
    • 9: ~0.30%
    • 10: ~0.30%
    • 11: ~0.30%
    • 12: ~1.10%
    • 13: ~0.30%
    • 14: ~0.30%
    • 15: ~0.30%
    • 16: ~0.30%
    • 17: ~0.30%
    • 18: ~0.30%
    • 19: ~0.30%
    • 20: ~0.30%
    • 21: ~0.30%
    • 22: ~0.30%
    • 23: ~0.40%
    • 24: ~0.30%
    • 25: ~0.30%
    • 26: ~0.30%
    • 27: ~0.90%
    • 28: ~0.30%
    • 29: ~0.40%
    • 30: ~0.30%
    • 31: ~1.10%
    • 32: ~0.30%
    • 33: ~0.30%
    • 34: ~0.30%
    • 35: ~0.30%
    • 36: ~0.30%
    • 37: ~0.30%
    • 38: ~0.30%
    • 39: ~0.30%
    • 40: ~0.30%
    • 41: ~0.30%
    • 42: ~0.30%
    • 43: ~0.30%
    • 44: ~0.30%
    • 45: ~0.30%
    • 46: ~0.30%
    • 47: ~0.30%
    • 48: ~0.30%
    • 49: ~0.40%
    • 50: ~0.30%
    • 51: ~0.30%
    • 52: ~0.30%
    • 53: ~0.60%
    • 54: ~0.70%
    • 55: ~0.30%
    • 56: ~0.30%
    • 57: ~0.30%
    • 58: ~0.30%
    • 59: ~0.30%
    • 60: ~0.30%
    • 61: ~0.30%
    • 62: ~0.30%
    • 63: ~0.30%
    • 64: ~0.30%
    • 65: ~0.30%
    • 66: ~0.30%
    • 67: ~0.30%
    • 68: ~0.50%
    • 69: ~0.30%
    • 70: ~0.30%
    • 71: ~0.30%
    • 72: ~0.30%
    • 73: ~0.30%
    • 74: ~0.30%
    • 75: ~0.30%
    • 76: ~0.30%
    • 77: ~0.30%
    • 78: ~0.30%
    • 79: ~0.30%
    • 80: ~0.30%
    • 81: ~0.30%
    • 82: ~0.30%
    • 83: ~0.30%
    • 84: ~0.80%
    • 85: ~0.60%
    • 86: ~0.30%
    • 87: ~0.30%
    • 88: ~0.30%
    • 89: ~0.30%
    • 90: ~0.30%
    • 91: ~0.30%
    • 92: ~0.30%
    • 93: ~0.50%
    • 94: ~0.30%
    • 95: ~0.30%
    • 96: ~0.30%
    • 97: ~0.30%
    • 98: ~0.80%
    • 99: ~0.60%
    • 100: ~0.50%
    • 101: ~0.30%
    • 102: ~0.30%
    • 103: ~16.50%
    • 104: ~0.30%
    • 105: ~0.30%
    • 106: ~0.30%
    • 107: ~0.30%
    • 108: ~0.30%
    • 109: ~0.30%
    • 110: ~0.30%
    • 111: ~0.30%
    • 112: ~0.50%
    • 113: ~0.30%
    • 114: ~0.30%
    • 115: ~0.30%
    • 116: ~0.30%
    • 117: ~0.30%
    • 118: ~0.30%
    • 119: ~0.30%
    • 120: ~0.30%
    • 121: ~0.70%
    • 122: ~0.30%
    • 123: ~0.30%
    • 124: ~0.30%
    • 125: ~0.40%
    • 126: ~2.10%
    • 127: ~2.10%
    • 128: ~0.30%
    • 129: ~0.30%
    • 130: ~0.50%
    • 131: ~0.50%
    • 132: ~0.50%
    • 133: ~0.40%
    • 134: ~0.30%
    • 135: ~0.30%
    • 136: ~0.30%
    • 137: ~0.80%
    • 138: ~0.30%
    • 139: ~0.30%
    • 140: ~0.30%
    • 141: ~0.30%
    • 142: ~0.30%
    • 143: ~0.30%
    • 144: ~0.30%
    • 145: ~0.30%
    • 146: ~0.30%
    • 147: ~0.30%
    • 148: ~0.30%
    • 149: ~0.30%
    • 150: ~0.50%
    • 151: ~0.30%
    • 152: ~0.40%
    • 153: ~0.30%
    • 154: ~0.30%
    • 155: ~0.30%
    • 156: ~0.30%
    • 157: ~0.30%
    • 158: ~0.30%
    • 159: ~0.30%
    • 160: ~0.30%
    • 161: ~0.30%
    • 162: ~0.30%
    • 163: ~0.30%
    • 164: ~0.40%
    • 165: ~0.30%
    • 166: ~0.30%
    • 167: ~0.30%
    • 168: ~0.30%
    • 169: ~0.30%
    • 170: ~0.30%
    • 171: ~0.70%
    • 172: ~0.30%
    • 173: ~0.30%
    • 174: ~0.30%
    • 175: ~1.30%
    • 176: ~0.30%
    • 177: ~0.40%
    • 178: ~0.30%
    • 179: ~0.30%
    • 180: ~0.30%
    • 181: ~1.50%
    • 182: ~0.30%
    • 183: ~0.30%
    • 184: ~0.30%
    • 185: ~0.30%
    • 186: ~0.30%
    • 187: ~0.30%
    • 188: ~0.30%
    • 189: ~1.60%
    • 190: ~0.30%
    • 191: ~0.30%
    • 192: ~7.20%
    • 193: ~0.30%
    • 194: ~1.00%
    • 195: ~0.30%
    • 196: ~0.30%
    • 197: ~0.30%
    • 198: ~1.50%
  • Samples (sentence → label):
    • 科目:コンクリート。名称:免震基礎天端グラウト注入。 → 0
    • 科目:コンクリート。名称:免震基礎天端グラウト注入。 → 0
    • 科目:コンクリート。名称:免震基礎天端グラウト注入。 → 0
  • Loss: sentence_transformer_lib.custom_batch_all_trip_loss.CustomBatchAllTripletLoss
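
The CustomBatchAllTripletLoss above is a project-specific implementation that is not distributed with this card. Below is a minimal training sketch under two assumptions: the public losses.BatchAllTripletLoss stands in for the custom loss, and the placeholder rows merely illustrate the sentence / label format described above (the checkpoint loaded is the published model itself, since the card does not state the base model):

from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer, losses

# Placeholder rows in the (sentence, label) format used by the training dataset
train_dataset = Dataset.from_dict({
    "sentence": [
        "科目:コンクリート。名称:免震基礎天端グラウト注入。",
        "科目:タイル。名称:床磁器質タイル。",
    ],
    "label": [0, 103],
})

model = SentenceTransformer("Detomo/cl-nagoya-sup-simcse-ja-nss-v_1_0_5")
loss = losses.BatchAllTripletLoss(model)  # public stand-in for CustomBatchAllTripletLoss

trainer = SentenceTransformerTrainer(
    model=model,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()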

Training Hyperparameters

Non-Default Hyperparameters

  • per_device_train_batch_size: 512
  • per_device_eval_batch_size: 512
  • learning_rate: 1e-05
  • weight_decay: 0.01
  • num_train_epochs: 250
  • warmup_ratio: 0.2
  • fp16: True
  • batch_sampler: group_by_label
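
A minimal sketch of how these non-default values map onto SentenceTransformerTrainingArguments; the output directory is a hypothetical placeholder, and the resulting args object would be passed to a SentenceTransformerTrainer such as the one sketched earlier:

from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="output/cl-nagoya-sup-simcse-ja-nss",  # hypothetical path
    per_device_train_batch_size=512,
    per_device_eval_batch_size=512,
    learning_rate=1e-5,
    weight_decay=0.01,
    num_train_epochs=250,
    warmup_ratio=0.2,
    fp16=True,
    batch_sampler=BatchSamplers.GROUP_BY_LABEL,  # groups batch samples that share a label
)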

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: no
  • prediction_loss_only: True
  • per_device_train_batch_size: 512
  • per_device_eval_batch_size: 512
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 1e-05
  • weight_decay: 0.01
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 250
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.2
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • tp_size: 0
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: group_by_label
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch Step Training Loss
2.24 50 0.0583
4.48 100 0.0626
6.72 150 0.0638
9.08 200 0.0659
11.32 250 0.0629
13.56 300 0.0608
15.8 350 0.0607
18.16 400 0.0584
20.4 450 0.0577
22.64 500 0.0566
24.88 550 0.0594
27.24 600 0.0552
29.48 650 0.0512
31.72 700 0.053
34.08 750 0.0538
36.32 800 0.0506
38.56 850 0.054
40.8 900 0.0498
43.16 950 0.0538
45.4 1000 0.0491
47.64 1050 0.0445
49.88 1100 0.0466
52.24 1150 0.0458
54.48 1200 0.0507
56.72 1250 0.0408
59.08 1300 0.0462
61.32 1350 0.0443
63.56 1400 0.0392
65.8 1450 0.0389
68.16 1500 0.0455
70.4 1550 0.049
72.64 1600 0.0435
74.88 1650 0.0416
77.24 1700 0.041
79.48 1750 0.0443
81.72 1800 0.0423
84.08 1850 0.0457
86.32 1900 0.0375
88.56 1950 0.0428
90.8 2000 0.037
93.16 2050 0.0441
95.4 2100 0.0382
97.64 2150 0.0424
99.88 2200 0.041
1.6667 50 0.0381
3.6111 100 0.0373
5.5556 150 0.0381
7.5 200 0.0394
9.4444 250 0.0399
11.3889 300 0.0405
13.3333 350 0.0409
15.2778 400 0.0408
17.2222 450 0.0404
19.1667 500 0.0396
21.1111 550 0.038
23.0556 600 0.0346
24.7222 650 0.0381
26.6667 700 0.0356
28.6111 750 0.0344
30.5556 800 0.0344
32.5 850 0.0365
34.4444 900 0.0354
36.3889 950 0.0324
38.3333 1000 0.0301
40.2778 1050 0.038
42.2222 1100 0.0351
44.1667 1150 0.0344
46.1111 1200 0.0339
48.0556 1250 0.0358
49.7222 1300 0.0312
51.6667 1350 0.0278
53.6111 1400 0.0342
55.5556 1450 0.0291
57.5 1500 0.03
59.4444 1550 0.03
61.3889 1600 0.0303
63.3333 1650 0.0339
65.2778 1700 0.0342
67.2222 1750 0.0283
69.1667 1800 0.0271
71.1111 1850 0.0327
73.0556 1900 0.0296
74.7222 1950 0.0295
76.6667 2000 0.0259
78.6111 2050 0.0296
80.5556 2100 0.0256
82.5 2150 0.0271
84.4444 2200 0.0287
86.3889 2250 0.028
88.3333 2300 0.0275
90.2778 2350 0.0294
92.2222 2400 0.0243
94.1667 2450 0.0275
96.1111 2500 0.0258
98.0556 2550 0.0215
99.7222 2600 0.0252
101.6667 2650 0.029
103.6111 2700 0.0265
105.5556 2750 0.0258
107.5 2800 0.0222
109.4444 2850 0.0263
111.3889 2900 0.0266
113.3333 2950 0.0211
115.2778 3000 0.0251
117.2222 3050 0.0224
119.1667 3100 0.0204
121.1111 3150 0.0226
123.0556 3200 0.025
124.7222 3250 0.0214
126.6667 3300 0.0237
128.6111 3350 0.0287
130.5556 3400 0.0229
132.5 3450 0.0171
134.4444 3500 0.0215
136.3889 3550 0.0236
138.3333 3600 0.0238
140.2778 3650 0.0168
142.2222 3700 0.0281
144.1667 3750 0.0247
146.1111 3800 0.02
148.0556 3850 0.0225
149.7222 3900 0.0189
151.6667 3950 0.0178
153.6111 4000 0.0174
155.5556 4050 0.0165
157.5 4100 0.0197
159.4444 4150 0.0226
161.3889 4200 0.0126
163.3333 4250 0.0224
165.2778 4300 0.0174
167.2222 4350 0.0214
169.1667 4400 0.0159
171.1111 4450 0.0121
173.0556 4500 0.0194
174.7222 4550 0.0216
176.6667 4600 0.0193
178.6111 4650 0.0157
180.5556 4700 0.0159
182.5 4750 0.016
184.4444 4800 0.0182
186.3889 4850 0.0181
188.3333 4900 0.0164
190.2778 4950 0.0204
192.2222 5000 0.0188
194.1667 5050 0.0155
196.1111 5100 0.0166
198.0556 5150 0.0165
199.7222 5200 0.0111
201.6667 5250 0.0181
203.6111 5300 0.0196
205.5556 5350 0.0164
207.5 5400 0.0125
209.4444 5450 0.0168
211.3889 5500 0.0174
213.3333 5550 0.0144
215.2778 5600 0.0169
217.2222 5650 0.019
219.1667 5700 0.0178
221.1111 5750 0.014
223.0556 5800 0.0154
224.7222 5850 0.0151
226.6667 5900 0.0105
228.6111 5950 0.013
230.5556 6000 0.0152
232.5 6050 0.0138
234.4444 6100 0.0133
236.3889 6150 0.015
238.3333 6200 0.0119
240.2778 6250 0.0185
242.2222 6300 0.0104
244.1667 6350 0.0155
246.1111 6400 0.0135
248.0556 6450 0.0141
249.7222 6500 0.0168

Framework Versions

  • Python: 3.11.12
  • Sentence Transformers: 3.4.1
  • Transformers: 4.51.3
  • PyTorch: 2.6.0+cu124
  • Accelerate: 1.6.0
  • Datasets: 3.5.1
  • Tokenizers: 0.21.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

CustomBatchAllTripletLoss

@misc{hermans2017defense,
    title={In Defense of the Triplet Loss for Person Re-Identification},
    author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
    year={2017},
    eprint={1703.07737},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}