---
license: apache-2.0
tags:
- quantized
- 8-bit
- GGUF
language:
- fr
base_model:
- manu/bge-m3-custom-fr
---

This model was converted to GGUF format from [`manu/bge-m3-custom-fr`](https://huggingface.co/manu/bge-m3-custom-fr) using llama.cpp.
Refer to the [original model card](https://huggingface.co/manu/bge-m3-custom-fr) for more details on the model.


You can serve this model for embeddings using `llama-server`.

For build and installation instructions, see the [llama.cpp server README](https://github.com/ggml-org/llama.cpp/blob/master/examples/server/README.md#build).

```shell
./build/bin/llama-server -m bge-m3-custom-fr_q8_0.gguf --embedding --pooling mean -ub 8192 --port 8001 --batch-size 4096
```