# LPN64/granite-3.2-8b-lora-rag-query-rewrite-F16-GGUF

This LoRA adapter was converted to GGUF format from ibm-granite/granite-3.2-8b-lora-rag-query-rewrite via ggml.ai's GGUF-my-lora space. Refer to the original adapter repository for more details.
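To run it you need two files: a GGUF build of the base model (obtained separately) and this adapter GGUF. A minimal sketch for fetching the adapter, assuming the Hugging Face CLI is installed (`pip install -U "huggingface_hub[cli]"`):

```bash
# Download the adapter GGUF from this repo into the current directory.
# The base model GGUF is not included here and must be obtained separately.
huggingface-cli download LPN64/granite-3.2-8b-lora-rag-query-rewrite-F16-GGUF \
  granite-3.2-8b-lora-rag-query-rewrite-f16.gguf --local-dir .
```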

## Use with llama.cpp

```bash
# with cli
llama-cli -m base_model.gguf --lora granite-3.2-8b-lora-rag-query-rewrite-f16.gguf (...other args)

# with server
llama-server -m base_model.gguf --lora granite-3.2-8b-lora-rag-query-rewrite-f16.gguf (...other args)
```
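If the adapter's influence needs to be weakened or strengthened, llama.cpp also accepts a scaled form of the flag. A brief sketch; the 0.8 scale is an arbitrary illustrative value:

```bash
# Same as above, but applying the adapter at a chosen strength
llama-cli -m base_model.gguf --lora-scaled granite-3.2-8b-lora-rag-query-rewrite-f16.gguf 0.8 (...other args)
```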

To learn more about LoRA usage with the llama.cpp server, refer to the llama.cpp server documentation.
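As a usage sketch, once llama-server is running with the adapter loaded, requests can be sent to its OpenAI-compatible chat endpoint. This assumes the server's default listen address (localhost:8080); the prompt content is illustrative only, not the adapter's exact expected template:

```bash
# Send an illustrative query-rewrite prompt to the running server.
# Assumes the default llama-server address http://localhost:8080.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "user", "content": "Rewrite the last user turn of the conversation as a standalone search query."}
    ]
  }'
```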

GGUF details: 23.6M params, granite architecture, F16 (16-bit).