Training supported
Qwen3 embedding models can be fine-tuned with SWIFT:
pip install ms-swift -U
INFONCE_MASK_FAKE_NEGATIVE=true \
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \
NPROC_PER_NODE=8 \
swift sft \
--model Qwen/Qwen3-Embedding-0.6B \
--task_type embedding \
--model_type qwen3_emb \
--train_type full \
--dataset sentence-transformers/stsb:positive \
--split_dataset_ratio 0.05 \
--eval_strategy steps \
--output_dir output \
--eval_steps 20 \
--num_train_epochs 5 \
--save_steps 70 \
--per_device_train_batch_size 4 \
--per_device_eval_batch_size 4 \
--gradient_accumulation_steps 4 \
--learning_rate 6e-6 \
--loss_type infonce \
--label_names labels \
--dataloader_drop_last true \
--deepspeed zero3
We use --loss_type infonce, which is also the loss type used to train the original model. Other loss types can be used as well, such as --loss_type cosine_similarity. InfoNCE is a contrastive-learning loss. In the script above, other samples in the batch are treated as negative examples by default; this behavior is controlled by the INFONCE_USE_BATCH environment variable, which defaults to True. The script also sets an additional environment variable, INFONCE_MASK_FAKE_NEGATIVE=true, which ignores negative examples whose similarity is excessively high (for example, negatives whose similarity exceeds the positive similarity plus 0.1), preventing duplicated samples or false negatives in the dataset from interfering with training.
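As a rough illustration of that masking rule, here is a minimal sketch (not SWIFT's actual implementation; the function name, temperature, and the 0.1 margin are assumptions for illustration) of InfoNCE over in-batch negatives where any negative whose similarity exceeds the positive similarity plus the margin is dropped from the softmax:

import torch
import torch.nn.functional as F

def infonce_with_fake_negative_mask(query_emb, doc_emb, temperature=0.05, margin=0.1):
    # query_emb, doc_emb: (batch, dim); doc_emb[i] is the positive for query_emb[i]
    q = F.normalize(query_emb, dim=-1)
    d = F.normalize(doc_emb, dim=-1)
    sim = q @ d.T                                  # (batch, batch) cosine similarities
    pos_sim = sim.diagonal().unsqueeze(1)          # each query's positive similarity
    eye = torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)
    fake_neg = (sim > pos_sim + margin) & ~eye     # "fake" negatives: suspiciously similar off-diagonal entries
    logits = sim.masked_fill(fake_neg, float("-inf")) / temperature  # masked entries get zero weight in the softmax
    labels = torch.arange(sim.size(0), device=sim.device)            # the positive for query i is document i
    return F.cross_entropy(logits, labels)

The masked entries contribute nothing to the denominator, so duplicated or mislabeled pairs no longer push the positive pair apart.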
The dataset format corresponding to InfoNCE loss is as follows:
{"query": "sentence1", "response": "sentence1-pos", "rejected_response": ["sentence1-neg1", "sentence1-neg2"]}
For each sample, its own negative examples (rejected_response) together with the positive and negative examples of every other sample in the batch are all used as its negatives.
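For reference, a training file in this format is plain JSON Lines; the sketch below (the file name and sentences are placeholders) writes one such record per line:

import json

samples = [
    {
        "query": "sentence1",
        "response": "sentence1-pos",
        "rejected_response": ["sentence1-neg1", "sentence1-neg2"],
    },
]

# One JSON object per line, UTF-8, no escaping of non-ASCII text
with open("train.jsonl", "w", encoding="utf-8") as f:
    for sample in samples:
        f.write(json.dumps(sample, ensure_ascii=False) + "\n")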
Documentation here:
https://swift.readthedocs.io/en/latest/BestPractices/Embedding.html
Hello, after specifying the dataset as sentence-transformers/stsb:positive, the preprocessed data should not contain a rejected_response field.
In other words, during training there are actually no hard negatives, only in-batch negatives. Is this understanding correct?