# Qwen3-Embedding-0.6B
This model is an ONNX conversion of the original Qwen3-Embedding model, tailored for use with Sinequa.
## Usage
This model is an LLM-based instruct retriever, so you can pass an instruction at query time that better suits your needs. The instruction should only be added to the query prefix; the passage prefix stays empty (see the sketch below).
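As a conceptual illustration only (the actual concatenation is handled by the vectorizer, not by your code), the query prefix is prepended to the raw query text following the Qwen3-Embedding instruct convention:

```python
# Conceptual sketch: how an instruct-style query prefix combines with the
# raw query text before vectorization. The vectorizer does this internally.
query_prefix = (
    "Instruct: Given a web search query, retrieve relevant passages "
    "that answer the query\nQuery:"
)
query = "What is the capital of China?"

effective_input = query_prefix + query
print(effective_input)
# Instruct: Given a web search query, retrieve relevant passages that answer the query
# Query:What is the capital of China?
```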
## Packaging
Download this folder and use "Package from local" in the Sinequa Runnable Model Wizard.
Note: no MRL cutoff is specified, so the embedding dimension output by this vectorizer is 1024.
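For reference, if an MRL cutoff were configured, the usual behavior of a Matryoshka-style cutoff is to truncate each embedding to its first N dimensions and re-normalize. The sketch below is only an illustration of that operation; the `cutoff` value is hypothetical and not part of this package.

```python
import numpy as np

def apply_mrl_cutoff(embedding: np.ndarray, cutoff: int) -> np.ndarray:
    """Truncate an embedding to its first `cutoff` dimensions and L2-normalize.

    This mirrors how an MRL cutoff is typically applied. With no cutoff
    configured (as in this package), the full 1024-dimensional vector is used.
    """
    truncated = embedding[:cutoff]
    norm = np.linalg.norm(truncated)
    return truncated / norm if norm > 0 else truncated

# Example: a full 1024-dim vector reduced to a hypothetical 256 dimensions.
full_vector = np.random.rand(1024).astype(np.float32)
reduced = apply_mrl_cutoff(full_vector, 256)
print(reduced.shape)  # (256,)
```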
## Requirements
- Minimal Sinequa version: 11.12
## Example of a payload in the Runnable Model API
```json
{
  "inputs": [
    {
      "text": "What is the capital of China?"
    },
    {
      "text": "Explain Gravity"
    }
  ],
  "options": {
    "context": "query",
    "passagePrefix": "",
    "queryPrefix": "Instruct: Given a web search query, retrieve relevant passages that answer the query\nQuery:"
  }
}
```
Notice that the instruction states the model is given a "web search query". You can modify it to make it more domain-specific, as in the sketch below.
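As a sketch of sending such a payload, the snippet below posts a query with a domain-specific instruction. The endpoint URL, headers, and instruction text are placeholders, not the actual Sinequa deployment values; adapt them to your installation.

```python
import requests

# Placeholder endpoint and token: replace with your Sinequa Runnable Model
# API URL and credentials.
ENDPOINT = "https://your-sinequa-host/api/v1/runnable-model/qwen3-embedding"
HEADERS = {
    "Authorization": "Bearer <your-token>",
    "Content-Type": "application/json",
}

payload = {
    "inputs": [
        {"text": "How do I rotate an expired TLS certificate?"}
    ],
    "options": {
        "context": "query",
        "passagePrefix": "",
        # A domain-specific instruction instead of the generic web-search one.
        "queryPrefix": (
            "Instruct: Given an IT support question, retrieve documentation "
            "passages that answer the question\nQuery:"
        ),
    },
}

response = requests.post(ENDPOINT, json=payload, headers=HEADERS, timeout=30)
response.raise_for_status()
print(response.json())  # Expected to contain one 1024-dimensional embedding.
```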