This model was converted to GGUF format from manu/bge-m3-custom-fr using llama.cpp.
Refer to the original model card for more details on the model.
You can serve the model as an embedding model with llama-server. To install it, follow the build instructions in the llama.cpp repository.

```shell
./build/bin/llama-server -m bge-m3-custom-fr_q8_0.gguf --embedding --pooling mean -ub 8192 --port 8001 --batch-size 4096
```
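Once the server is running, you can request embeddings over its OpenAI-compatible `/v1/embeddings` endpoint. A minimal sketch using only the Python standard library, assuming the server is reachable on port 8001 as launched above (the `embed` helper name is illustrative, not part of any library):

```python
import json
import urllib.request

URL = "http://localhost:8001/v1/embeddings"  # matches --port 8001 above

def build_payload(texts):
    """Encode a list of input strings as the JSON body the endpoint expects."""
    return json.dumps({"input": texts}).encode("utf-8")

def embed(texts, url=URL):
    """POST the texts to llama-server and return one vector per input."""
    req = urllib.request.Request(
        url,
        data=build_payload(texts),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    # The response follows the OpenAI embeddings schema: a "data" list
    # with one {"embedding": [...]} entry per input string.
    return [item["embedding"] for item in data["data"]]

if __name__ == "__main__":
    vectors = embed(["Bonjour le monde", "Hello world"])
    print(len(vectors), "vectors of dimension", len(vectors[0]))
```

With `--pooling mean`, each input string yields a single pooled vector rather than per-token embeddings.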