These are some GGUFs we created to make this model more portable. You can run them in Ollama as follows:

Create a file named `Modelfile` (no extension) whose contents are `FROM {gguf file}.gguf`. Then open a terminal in the folder containing the GGUF and register the model with `ollama create`, which takes a model name of your choosing in addition to `-f Modelfile`, as shown below.
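A minimal sketch of the two steps, assuming the GGUF sits in the same folder as the `Modelfile`; `my-model` is a placeholder name of your choosing, and `{gguf file}` / `{folder with gguf model}` stand for your actual filename and folder:

```
# Modelfile: point FROM at the GGUF file (path is relative to this file)
FROM ./{gguf file}.gguf
```

```sh
cd {folder with gguf model}          # the directory containing the GGUF and the Modelfile
ollama create my-model -f Modelfile  # registers the GGUF with Ollama under the chosen name
ollama run my-model                  # starts an interactive session to verify it works
```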

Model details: GGUF format, gemma2 architecture, 2.61B params, available in 2-bit and 32-bit quantizations.