SentenceTransformer based on Qwen/Qwen3-0.6B

This is a sentence-transformers model finetuned from Qwen/Qwen3-0.6B. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: Qwen/Qwen3-0.6B
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 1024 dimensions
  • Similarity Function: Cosine Similarity

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: Qwen3Model 
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
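
The same stack can be assembled by hand from Sentence Transformers modules. The following is a minimal illustrative sketch only: it builds the architecture from the base Qwen/Qwen3-0.6B weights rather than loading this finetuned checkpoint.

from sentence_transformers import SentenceTransformer, models

# Transformer backbone (Qwen3Model), truncating inputs at 512 tokens
word_embedding_model = models.Transformer("Qwen/Qwen3-0.6B", max_seq_length=512)
# Mean pooling over token embeddings -> one 1024-dimensional vector per text
pooling_model = models.Pooling(
    word_embedding_model.get_word_embedding_dimension(),
    pooling_mode_mean_tokens=True,
)
model = SentenceTransformer(modules=[word_embedding_model, pooling_model])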

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("chungimungi/Qwen3-0.6B-ms-marco")
# Run inference
sentences = [
    'what county is neptune city nj',
    'Neptune City, NJ. Neptune City is a borough in Monmouth County, New Jersey, United States. As of the 2010 United States Census, the borough population was 4,869. The Borough of Neptune City was incorporated on October 4, 1881, based on a referendum held on March 19, 1881.',
    'Neptune City, NJ. Sponsored Topics. Neptune City is a borough in Monmouth County, New Jersey, United States. As of the 2010 United States Census, the borough population was 4,869. The Borough of Neptune City was incorporated on October 4, 1881, based on a referendum held on March 19, 1881.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
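
Because the training data pairs search queries with passages, the embeddings also work for retrieval-style ranking. A minimal sketch, continuing from the snippet above:

# Rank the two passages against the query (higher score = more similar)
query_embedding = model.encode([sentences[0]])
passage_embeddings = model.encode(sentences[1:])
scores = model.similarity(query_embedding, passage_embeddings)
print(scores)
# a 1x2 tensor holding the cosine similarity of the query to each passage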

Training Details

Training Dataset

Unnamed Dataset

  • Size: 17,048,032 training samples
  • Columns: sentence_0, sentence_1, and sentence_2
  • Approximate statistics based on the first 1000 samples:
    • sentence_0: string; min: 2 tokens, mean: 7.16 tokens, max: 33 tokens
    • sentence_1: string; min: 25 tokens, mean: 84.34 tokens, max: 243 tokens
    • sentence_2: string; min: 16 tokens, mean: 81.56 tokens, max: 300 tokens
  • Samples:
    • Sample 1
      • sentence_0: what county is nettles island fl
      • sentence_1: Nettles Island. Nettles Island in Hutchinson Island Florida. A development of close to 1300 lots with anything from trailer pads to updated concrete block homes on a mostly man made island that juts out into the Indian River on Hutchinson Island in Saint Lucie County FL. Though, the official address for Nettles Island is in Jensen Beach.
      • sentence_2: Fleming Island is an unincorporated community and census-designated place in Clay County, Florida, United States. It is located 21 miles southwest of downtown Jacksonville, on the western side of the St. Johns River, off US 17. As of the 2010 census the Fleming Island CDP had a population of 27,126. Fleming Island's ZIP code became 32003 in 2004, giving it a different code from Orange Park, the incorporated town to the north.
    • Sample 2
      • sentence_0: what time of day to take estrogen
      • sentence_1: Time of day to take Estrogen. Hi. I think we all may find different times of day are better for each of our needs. I actually feel much better using my estrogen twice a day. I use half in the morning and half in the evening. I am using a different estrogen than you and am able to split my dose. I'm glad to hear that you have been feeling very good on your current hormone therapy Hopefully just a small adjustment may be needed as our estrogen needs can change overtime.
      • sentence_2: Eating fresh carrots or drinking a cup of fresh carrot juice 2-3 times a day is a wonderful way to bring on your period sooner than expected. Carrots contain high amounts of carotene, which encourages the production of estrogen. The more estrogen you have in your body, the more your period desires to arrive.
    • Sample 3
      • sentence_0: what effects does nicotine have on your body
      • sentence_1: Nicotine also activates areas of the brain that are involved in producing feelings of pleasure and reward. Recently, scientists discovered that nicotine raises the levels of a neurotransmitter called dopamine in the parts of the brain that produce feelings of pleasure and reward.
      • sentence_2: The action of nicotine in the body is very complicated. It is a mild stimulant which has an effect upon the heart and brain. It stimulates the central nervous system causing irregular heartbeat and blood pressure, induces vomiting and diarrhea, and first stimulates, then inhibits glandular secretions.icotine seems to provide both a stimulant and a depressant effect, and it is likely that the effect it has at any time is determined by the mood of the user, the environment and the circumstances of use.
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
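
In training code, this corresponds roughly to the instantiation below. This is a sketch, not the author's exact script; `model` stands for the SentenceTransformer being trained (for example, one built from Qwen/Qwen3-0.6B as in the architecture sketch above), and cos_sim is also the library default.

from sentence_transformers import losses
from sentence_transformers.util import cos_sim

# scale=20.0 and cosine similarity, matching the parameters reported above
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=cos_sim)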
    

Training Hyperparameters

Non-Default Hyperparameters

  • num_train_epochs: 1
  • max_steps: 10000
  • fp16: True
  • multi_dataset_batch_sampler: round_robin

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: no
  • prediction_loss_only: True
  • per_device_train_batch_size: 8
  • per_device_eval_batch_size: 8
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 1
  • max_steps: 10000
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • tp_size: 0
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: round_robin
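
Put together, a run with the hyperparameters listed above could be set up roughly as follows. This is a hedged sketch that reuses `model` and `train_loss` from the sketches earlier in this card; the output directory and the tiny placeholder dataset are illustrative, not values from this card.

from datasets import Dataset
from sentence_transformers import (
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)

# Placeholder triplet dataset with the same column layout as the training data
train_dataset = Dataset.from_dict({
    "sentence_0": ["example query"],
    "sentence_1": ["a relevant passage"],
    "sentence_2": ["an unrelated passage"],
})

args = SentenceTransformerTrainingArguments(
    output_dir="qwen3-0.6b-embeddings",  # placeholder output directory
    num_train_epochs=1,
    max_steps=10000,
    per_device_train_batch_size=8,
    fp16=True,
    multi_dataset_batch_sampler="round_robin",
)

trainer = SentenceTransformerTrainer(
    model=model,               # the SentenceTransformer being finetuned
    args=args,
    train_dataset=train_dataset,
    loss=train_loss,           # MultipleNegativesRankingLoss defined above
)
trainer.train()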

Training Logs

Epoch Step Training Loss
0.0002 500 1.7342
0.0005 1000 1.7194
0.0007 1500 1.6713
0.0009 2000 1.5885
0.0012 2500 1.4152
0.0014 3000 1.3052
0.0016 3500 1.1763
0.0019 4000 1.0714
0.0021 4500 1.0235
0.0023 5000 0.9484
0.0026 5500 0.9207
0.0028 6000 0.9076
0.0031 6500 0.8736
0.0033 7000 0.8671
0.0035 7500 0.8621
0.0038 8000 0.8414
0.0040 8500 0.8228
0.0042 9000 0.8101
0.0045 9500 0.8339
0.0047 10000 0.7968

Framework Versions

  • Python: 3.10.14
  • Sentence Transformers: 4.0.1
  • Transformers: 4.51.3
  • PyTorch: 2.6.0+cu124
  • Accelerate: 1.6.0
  • Datasets: 3.2.0
  • Tokenizers: 0.21.1
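
To approximate this environment, the listed versions can be pinned at install time, for example:

pip install sentence-transformers==4.0.1 transformers==4.51.3 accelerate==1.6.0 datasets==3.2.0 tokenizers==0.21.1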

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}