SentenceTransformer based on allenai/scibert_scivocab_uncased

This is a sentence-transformers model finetuned from allenai/scibert_scivocab_uncased. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: allenai/scibert_scivocab_uncased
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
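
The summary above corresponds to two composable sentence-transformers modules: a BERT encoder (SciBERT) followed by mean pooling over the token embeddings. As a minimal sketch of how those modules fit together (loading the published checkpoint, as shown under Usage, is the usual path; this manual assembly is only illustrative):

from sentence_transformers import SentenceTransformer, models

# BERT encoder from the SciBERT base checkpoint, truncating inputs at 512 tokens
word_embedding = models.Transformer("allenai/scibert_scivocab_uncased", max_seq_length=512)
# Mean pooling over the 768-dimensional token embeddings, matching the Pooling module above
pooling = models.Pooling(word_embedding.get_word_embedding_dimension(), pooling_mode="mean")

model = SentenceTransformer(modules=[word_embedding, pooling])
print(model)  # prints a module summary similar to the architecture shown above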

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("ML5562/fine-tuned-scibert_scivocab_uncased-all-json-M1_f16")
# Run inference
sentences = [
    'Select which statements are true regarding SCFGs.A penalty will be applied for any incorrect answers.',
    'The true statements regarding Stochastic Context-Free Grammars (SCFGs) are:\n\nA: The sum over all the probabilities of the rules of a SCFG that share the same left-hand side should sum up to 1, which is a fundamental property of probabilistic grammars. \nB: The probabilities of lexical rules of a SCFG correspond to emission probabilities of Hidden Markov Models (HMMs) for Part-of-Speech (PoS) tagging, indicating a similarity in how both types of models handle probabilities associated with observed events. \n\nThe other statements either misrepresent SCFG properties or are incorrect.',
    'The true statements regarding SCFGs are A and B. \n\nReason: A is true because the probabilities of rules with the same left-hand side must sum to 1 to maintain valid probability distributions. B is also true because lexical rules in SCFGs represent the probabilities of generating terminal symbols, analogous to emission probabilities in Hidden Markov Models (HMMs) used for Part-of-Speech (PoS) tagging. The other statements are either incorrect or not universally applicable to all SCFGs.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
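
The same embeddings can also drive semantic search over a small corpus, as mentioned in the introduction. A minimal sketch using sentence_transformers.util; the corpus and query strings below are placeholders:

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("ML5562/fine-tuned-scibert_scivocab_uncased-all-json-M1_f16")

# Placeholder corpus and query; replace with your own text
corpus = [
    "Stochastic context-free grammars assign probabilities to production rules.",
    "Karger's algorithm contracts random edges to find a minimum cut.",
]
query = "How does Karger's randomized min-cut algorithm work?"

corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# Returns, for each query, the top-k corpus indices with cosine-similarity scores
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)
print(hits[0])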

Evaluation

Metrics

Triplet

Metric Value
cosine_accuracy 0.6229
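
Cosine accuracy is the fraction of evaluation triplets for which the anchor embedding is closer (by cosine similarity) to the positive than to the negative. A minimal sketch of computing this metric with TripletEvaluator; the three lists below are placeholders standing in for the evaluation split's sentence_0 / sentence_1 / sentence_2 columns:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import TripletEvaluator

model = SentenceTransformer("ML5562/fine-tuned-scibert_scivocab_uncased-all-json-M1_f16")

# Placeholder triplets; the real evaluation uses the model's val-eval triplet data
anchors = ["an exam question"]
positives = ["a correct, well-reasoned answer"]
negatives = ["an incorrect or weaker answer"]

evaluator = TripletEvaluator(anchors=anchors, positives=positives, negatives=negatives, name="val-eval")
results = evaluator(model)
print(results)  # includes a cosine accuracy entry keyed by the evaluator name, as in the training logs below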

Training Details

Training Dataset

Unnamed Dataset

  • Size: 19,392 training samples
  • Columns: sentence_0, sentence_1, and sentence_2
  • Approximate statistics based on the first 1000 samples:
    • sentence_0: string (min: 5 tokens, mean: 91.5 tokens, max: 512 tokens)
    • sentence_1: string (min: 3 tokens, mean: 348.13 tokens, max: 512 tokens)
    • sentence_2: string (min: 3 tokens, mean: 318.16 tokens, max: 512 tokens)
  • Samples:
    Sample 1
    • sentence_0: In class, we saw Karger's beautiful randomized algorithm for finding a min-cut in an undirected graph $G=(V,E)$ with $n = |V|$ vertices. Each iteration of Karger's algorithm can be implemented in time $O(n^2)$, and if repeated $\Theta(n^2 \log n)$ times, Karger's algorithm returns a min-cut with probability at least $1-1/n$. However, this leads to the often prohibitively large running time of $O(n^4 \log n)$. Karger and Stein made a crucial observation that allowed them to obtain a much faster algorithm for min-cut: the Karger-Stein algorithm runs in time $O(n^2 \log^3 n)$ and finds a min-cut with probability at least $1-1/n$. Explain in a couple of sentences the main idea that allowed Karger and Stein to modify Karger's algorithm into the much faster Karger-Stein algorithm. In other words, what are the main differences between the two algorithms?

    Sample 2
    • sentence_0: If we need to create a channel that protects confidentiality and we have at our disposal a channel that protects integrity and authenticity, we need to use
    • sentence_1: Answer: 3

      To protect confidentiality, we need to ensure that the information being sent over the channel cannot be read by unauthorized parties. The options provided suggest different methods that can be employed for security:

      1. Symmetric key encryption: This method encrypts data using the same key for both encryption and decryption. While it does provide confidentiality, the question specifies that we already have a channel that protects integrity and authenticity, which might imply that we are looking for a method that can be integrated with that existing channel.

      2. Message authentication codes (MACs): These are used to ensure the integrity and authenticity of a message but do not provide confidentiality. Therefore, this option does not address the need for confidentiality.

      3. Public key encryption: This method uses a pair of keys (public and private) for encryption and decryption. It allows for secure transmission of data, ensuring confidentiality, especially w...
    • sentence_2: Answer: 3

      To protect confidentiality, public key encryption is necessary as it allows for secure data transmission while ensuring that only authorized parties can decrypt the message. This method complements the existing channel that protects integrity and authenticity, thereby addressing the confidentiality requirement effectively.

    Sample 3
    • sentence_0: For a $n$-bit block cipher with $k$-bit key, given a plaintext-ciphertext pair, a key exhaustive search has an average number of trials of \dots
    • sentence_1: To determine the average number of trials required for a key exhaustive search on a block cipher, we need to consider the following:

      1. Key Space: A block cipher with a $k$-bit key has a total of $2^k$ possible keys.
      2. Exhaustive Search: In an exhaustive search, one tries each possible key until the correct one is found. On average, the correct key will be found after trying half of the total keys.

      Therefore, in a key exhaustive search, the average number of trials is given by:

      \[ \text{Average Trials} = \frac{2^k}{2} = 2^{k-1} \]

      However, in terms of options provided, we're looking for what corresponds to the average trials.

      3. Since the options include $2^k$ and $\frac{2^k + 1}{2}$, we consider that on average, we would try about half of the keyspace, which can be represented as $\frac{2^k + 1}{2}$ for approximation in the context of average calculations.

      Thus, the correct answer is:

      Answer: $\frac{2^k + 1}{2}$
    • sentence_2: To determine the average number of trials for a key exhaustive search in the context of a block cipher, we need to analyze the options given and the definitions involved.

      1. In an $n$-bit block cipher, the number of possible keys is $2^k$, where $k$ is the bit length of the key.

      2. An exhaustive search means testing every possible key until the correct one is found.

      3. Since there are $2^k$ possible keys, in the worst case we would need to try all $2^k$ keys. However, on average, if you were to randomly guess a key, you would expect to find the correct key after trying about half of all possible keys.

      Thus, the average number of trials for an exhaustive search would be:

      \[ \text{Average trials} = \frac{2^k}{2} = \frac{2^k + 1}{2} \]

      This matches one of the options provided.

      Final Answer: 3
  • Loss: TripletLoss with these parameters:
    {
        "distance_metric": "TripletDistanceMetric.EUCLIDEAN",
        "triplet_margin": 5
    }
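
A minimal sketch of constructing this loss with the listed parameters (Euclidean distance, margin 5); the dataset below is a tiny placeholder with the same three columns as the training data:

from datasets import Dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import TripletLoss, TripletDistanceMetric

# Start from the base SciBERT checkpoint (mean pooling is added automatically)
model = SentenceTransformer("allenai/scibert_scivocab_uncased")

# Tiny placeholder dataset; the real training set has 19,392 (sentence_0, sentence_1, sentence_2) rows
train_dataset = Dataset.from_dict({
    "sentence_0": ["an exam question"],
    "sentence_1": ["a correct, well-reasoned answer"],
    "sentence_2": ["an incorrect or weaker answer"],
})

# TripletLoss with the parameters listed above
loss = TripletLoss(model, distance_metric=TripletDistanceMetric.EUCLIDEAN, triplet_margin=5)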
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 4
  • per_device_eval_batch_size: 4
  • num_train_epochs: 20
  • fp16: True
  • multi_dataset_batch_sampler: round_robin
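
A minimal sketch of passing these non-default values to SentenceTransformerTrainingArguments and launching a training run, reusing the model, train_dataset, loss, and evaluator objects from the sketches above; output_dir is a placeholder, and multi_dataset_batch_sampler is omitted because it only matters when training on multiple datasets:

from sentence_transformers import SentenceTransformerTrainer, SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="output/fine-tuned-scibert",  # placeholder output path
    eval_strategy="steps",
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    num_train_epochs=20,
    fp16=True,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
    evaluator=evaluator,
)
trainer.train()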

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 4
  • per_device_eval_batch_size: 4
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 20
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: round_robin

Training Logs

Epoch Step Training Loss val-eval_cosine_accuracy
0.1031 500 4.7355 0.5606
0.2063 1000 4.5245 0.5852
0.3094 1500 4.4665 0.5988
0.4125 2000 4.6664 0.5545
0.5157 2500 4.7732 0.5961
0.6188 3000 4.3502 0.5827
0.7219 3500 4.5098 0.5821
0.8251 4000 4.3916 0.5969
0.9282 4500 4.5026 0.5965
1.0 4848 - 0.6106
1.0314 5000 4.3997 0.6118
1.1345 5500 4.131 0.5992
1.2376 6000 4.005 0.6038
1.3408 6500 4.0346 0.5990
1.4439 7000 4.1737 0.5959
1.5470 7500 4.256 0.6048
1.6502 8000 4.035 0.6122
1.7533 8500 4.0693 0.6083
1.8564 9000 4.2146 0.5978
1.9596 9500 4.3037 0.6229

Framework Versions

  • Python: 3.12.8
  • Sentence Transformers: 3.4.1
  • Transformers: 4.48.2
  • PyTorch: 2.5.1+cu124
  • Accelerate: 1.3.0
  • Datasets: 3.2.0
  • Tokenizers: 0.21.0

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

TripletLoss

@misc{hermans2017defense,
    title={In Defense of the Triplet Loss for Person Re-Identification},
    author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
    year={2017},
    eprint={1703.07737},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}