Impulse2000/multilingual-e5-large-instruct-GGUF

This model was converted to GGUF format from intfloat/multilingual-e5-large-instruct using llama.cpp's convert_hf_to_gguf.py script. Refer to the original model card for more details on the model.
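For reference, a conversion along these lines can be reproduced with llama.cpp's convert_hf_to_gguf.py. The sketch below is an assumed invocation (the exact flags used for this repo are not documented here); it assumes a llama.cpp checkout and a local snapshot of the base model, and produces an 8-bit GGUF file.

```python
# Hypothetical reproduction of the conversion step described above (assumed flags).
import subprocess

subprocess.run(
    [
        "python", "convert_hf_to_gguf.py",
        "multilingual-e5-large-instruct",                         # local HF model directory
        "--outfile", "multilingual-e5-large-instruct-q8_0.gguf",  # output GGUF file
        "--outtype", "q8_0",                                      # 8-bit quantization
    ],
    check=True,
)
```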

Format: GGUF
Model size: 559M params
Architecture: bert
Quantizations available: 8-bit, 16-bit
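As a usage sketch (not part of the original card), the GGUF files can be loaded with llama-cpp-python to compute embeddings. The file name below is an assumption matching the 8-bit quantization, and the instruction-prefixed query format follows the base model's card.

```python
# Minimal embedding sketch using llama-cpp-python (assumed runtime; any
# GGUF-capable tool such as llama.cpp's llama-embedding would also work).
from llama_cpp import Llama

llm = Llama(
    model_path="multilingual-e5-large-instruct-q8_0.gguf",  # assumed file name
    embedding=True,  # run the model in embedding mode
)

# E5-instruct models expect an instruction-prefixed query (see the base model card).
query = (
    "Instruct: Given a web search query, retrieve relevant passages that answer the query\n"
    "Query: how much protein should a female eat"
)

vector = llm.embed(query)  # list of floats: the embedding for the input text
print(len(vector))
```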
