Supa-AI/gemma2-9b-cpt-sahabatai-v1-instruct-q8_0-gguf

This model was converted to GGUF format from GoToCompany/gemma2-9b-cpt-sahabatai-v1-instruct using llama.cpp. Refer to the original model card for details on the model, its training data, and its intended use.

Use with llama.cpp

CLI:

llama-cli --hf-repo Supa-AI/gemma2-9b-cpt-sahabatai-v1-instruct-q8_0-gguf --hf-file gemma2-9b-cpt-sahabatai-v1-instruct.q8_0.gguf -p "Your prompt here"
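Gemma-2 instruct models are trained on the Gemma chat turn format, so the raw string passed to -p generally works best when wrapped in those turn tags. A minimal Python helper to build such a prompt (the tag strings follow the standard Gemma template; that this fine-tune uses the same template is an assumption based on its gemma2 architecture):

```python
def gemma_chat_prompt(user_message: str) -> str:
    """Wrap a user message in the Gemma-2 chat turn format.

    The returned string can be passed directly to `llama-cli -p "..."`.
    """
    return (
        "<start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

# Example: build a prompt for the CLI invocation above
print(gemma_chat_prompt("Halo, apa kabar?"))
```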

Server:

llama-server --hf-repo Supa-AI/gemma2-9b-cpt-sahabatai-v1-instruct-q8_0-gguf --hf-file gemma2-9b-cpt-sahabatai-v1-instruct.q8_0.gguf -c 2048
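Once llama-server is running, it exposes an OpenAI-compatible /v1/chat/completions endpoint and applies the model's chat template to the messages itself. A sketch of querying it from Python using only the standard library (the URL assumes llama-server's default port 8080; adjust if you started the server with --port):

```python
import json
import urllib.request


def build_chat_request(prompt: str, max_tokens: int = 256) -> dict:
    # OpenAI-style chat payload; llama-server formats `messages`
    # with the model's own chat template server-side.
    return {
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }


def query_server(prompt: str,
                 url: str = "http://localhost:8080/v1/chat/completions") -> str:
    payload = json.dumps(build_chat_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]


if __name__ == "__main__":
    # Requires the llama-server command above to be running locally.
    print(query_server("Halo, apa kabar?"))
```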

Model Details

Format: GGUF
Model size: 9.24B params
Architecture: gemma2

Quantization: 8-bit (Q8_0)

