GGUF

This is a llama.cpp GGUF conversion of the EuroLLM-9B model.

The original model is described here: https://huggingface.co/blog/eurollm-team/eurollm-9b

For more information about the original project, see: https://huggingface.co/utter-project
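
For anyone who wants to reproduce the conversion, this is the kind of invocation used with llama.cpp's convert_hf_to_gguf.py script. It is only a rough sketch: the checkpoint directory, script path, and output filename are illustrative, and the exact script name and flags depend on the llama.cpp version you have checked out.

```python
# Rough sketch of an F16 GGUF conversion using llama.cpp's converter script.
# Paths and filenames below are illustrative, not the exact ones used for this repo.
import subprocess

subprocess.run(
    [
        "python",
        "llama.cpp/convert_hf_to_gguf.py",   # converter script shipped in the llama.cpp repo
        "EuroLLM-9B",                        # local directory holding the original HF checkpoint
        "--outtype", "f16",                  # keep weights in 16-bit floats, matching this repo
        "--outfile", "EuroLLM-9B-F16.gguf",  # name of the resulting GGUF file
    ],
    check=True,  # raise if the conversion fails
)
```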

I loaded the GGUF model into LM Studio (https://lmstudio.ai/) and sent it a prompt in Portuguese (screenshot).
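
LM Studio is just one way to use the file; the same GGUF can also be run programmatically. Below is a minimal sketch with llama-cpp-python, where the filename, context size, and the Portuguese prompt are illustrative assumptions (not taken from the screenshot).

```python
# Minimal sketch: load the GGUF with llama-cpp-python and send a prompt in Portuguese.
# The filename, context size, and prompt are illustrative assumptions.
from llama_cpp import Llama

llm = Llama(
    model_path="EuroLLM-9B-F16.gguf",  # path to the downloaded GGUF file
    n_ctx=4096,                        # context window; lower it if RAM is tight
)

output = llm(
    "Explica em poucas palavras o que é o EuroLLM.",  # example prompt in Portuguese
    max_tokens=128,
)
print(output["choices"][0]["text"])
```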

Format: GGUF
Model size: 9.15B params
Architecture: llama
Precision: 16-bit (F16)

Model tree for safedev/EuroLLM-9B-F16-GGUF: 10 quantized models derive from this model.
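
If you prefer to fetch the file from a script instead of the web UI, it can be pulled with huggingface_hub. A small sketch, assuming a plausible .gguf filename (check the repo's file listing for the real one):

```python
# Sketch: download the GGUF file from this repo with huggingface_hub.
# The filename below is an assumption; verify it against the repo's file list.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="safedev/EuroLLM-9B-F16-GGUF",
    filename="EuroLLM-9B-F16.gguf",  # hypothetical filename
)
print(local_path)  # point LM Studio or llama.cpp at this local path
```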