Mismatched output scores
Hi, after trying the demo I got similarities of
tensor([[0.4989, 0.7087, 0.5910, 0.5932]])
rather than
tensor([[0.3011, 0.6359, 0.4930, 0.4889]]).
Any ideas?
>>> sentence_transformers.__version__
'5.1.1'
>>> torch.__version__
'2.6.0+cu118'
>>> transformers.__version__
'4.56.2'
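For reference, a minimal repro sketch along the lines of the model card demo (the query and documents below are placeholders, not necessarily the exact strings from the demo):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("google/embeddinggemma-300m")

# Placeholder inputs; the demo compares one query against four documents.
query = "Which planet is known as the Red Planet?"
documents = [
    "Placeholder document 1.",
    "Placeholder document 2.",
    "Placeholder document 3.",
    "Placeholder document 4.",
]

# encode_query / encode_document apply the model's query and document prompts.
query_embedding = model.encode_query(query)
document_embeddings = model.encode_document(documents)

# Similarity matrix of shape (1, 4), printed as tensor([[...]]).
similarities = model.similarity(query_embedding, document_embeddings)
print(similarities)
```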
Me too. It's a bug, because the ONNX model produces the correct values.
Also, may I ask why there are two linear layers, Linear(768, 3072) + Identity activation + Linear(3072, 768), on top of the model? Since the activation is an identity, why not merge the two matrices into a single Linear(768, 768)?
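(For context on the merge question: with an identity activation, Linear(768, 3072) followed by Linear(3072, 768) is mathematically a single affine map, since W2(W1 x + b1) + b2 = (W2 W1) x + (W2 b1 + b2). A quick sketch of that folding with random weights, just to illustrate the equivalence; these are not the model's actual weights:)

```python
import torch

# Two stacked linear layers with an identity activation in between,
# matching the shapes mentioned above.
lin1 = torch.nn.Linear(768, 3072)
lin2 = torch.nn.Linear(3072, 768)

# Fold them into a single 768 -> 768 linear layer:
# y = W2 @ (W1 @ x + b1) + b2 = (W2 @ W1) @ x + (W2 @ b1 + b2)
merged = torch.nn.Linear(768, 768)
with torch.no_grad():
    merged.weight.copy_(lin2.weight @ lin1.weight)
    merged.bias.copy_(lin2.weight @ lin1.bias + lin2.bias)

x = torch.randn(4, 768)
print(torch.allclose(lin2(lin1(x)), merged(x), atol=1e-5))  # True
```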
Hi @Hannibal046 ,
Thanks for reaching out to us. The similarity scores may not be exactly the same numbers on every run. However, the distribution of the scores maintains the same pattern, which means the most relevant document is assigned the highest similarity score among the given list of documents.
I have run the same model card code locally and was able to see a similar level of similarity scores for the given documents. Please find the similarity scores from my local run below.
Thanks.
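(Side note: comparing the two score vectors quoted above directly, the top-ranked document is the same in both, but the ordering of the remaining documents is not identical, so the difference looks like more than run-to-run noise. A quick check using only the numbers from this thread:)

```python
import torch

# Scores reported in this thread vs. the values expected from the model card.
observed = torch.tensor([[0.4989, 0.7087, 0.5910, 0.5932]])
expected = torch.tensor([[0.3011, 0.6359, 0.4930, 0.4889]])

# Top-1 document agrees (index 1 in both cases)...
print(observed.argmax(dim=-1), expected.argmax(dim=-1))  # tensor([1]) tensor([1])

# ...but the full ranking does not: documents 2 and 3 swap places.
print(torch.argsort(observed, descending=True))  # tensor([[1, 3, 2, 0]])
print(torch.argsort(expected, descending=True))  # tensor([[1, 2, 3, 0]])
```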
pip install git+https://github.com/huggingface/[email protected]
pip install "sentence-transformers>=5.0.0"
solved.
from: https://huggingface.co/google/embeddinggemma-300m/discussions/8#68c18ee960549aa1b2f1affa