---
base_model: speakleash/Bielik-1.5B-v3.0-Instruct
license: apache-2.0
model_creator: speakleash
model_name: Bielik-1.5B-v3.0-Instruct
quantized_by: Second State Inc.
language:
  - pl
library_name: transformers
inference:
  parameters:
    temperature: 0.4
---
# Bielik-1.5B-v3.0-Instruct-GGUF
## Original Model

[speakleash/Bielik-1.5B-v3.0-Instruct](https://huggingface.co/speakleash/Bielik-1.5B-v3.0-Instruct)
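The quantized weights can be fetched directly from the Hub. A minimal sketch follows; note that the `second-state/Bielik-1.5B-v3.0-Instruct-GGUF` repository path is an assumption based on the quantizer's usual naming convention, not something this card confirms.

```shell
# Build the download URL for the Q5_K_M quantized weights.
# NOTE: the repo path below is assumed from the second-state/<model>-GGUF
# naming convention; verify it on the Hub before downloading.
repo="second-state/Bielik-1.5B-v3.0-Instruct-GGUF"
file="Bielik-1.5B-v3.0-Instruct-Q5_K_M.gguf"
url="https://huggingface.co/${repo}/resolve/main/${file}"
echo "$url"
# curl -LO "$url"   # uncomment to download (file is on the order of 1 GB)
```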
## Run with LlamaEdge

- LlamaEdge version: v0.18.3
- Prompt template

  - Prompt type: `chatml`

  - Prompt string

    ```text
    <|im_start|>system
    {system_message}<|im_end|>
    <|im_start|>user
    {prompt}<|im_end|>
    <|im_start|>assistant
    ```

- Context size: 8192
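To make the chatml layout above concrete, the sketch below renders the template with illustrative messages (the Polish system and user strings are examples, not part of this card):

```shell
# Render the chatml prompt string with example values.
# system_message and prompt are illustrative placeholders.
system_message="Jesteś pomocnym asystentem."
prompt="Czym jest kwantyzacja modelu?"
printf '<|im_start|>system\n%s<|im_end|>\n<|im_start|>user\n%s<|im_end|>\n<|im_start|>assistant\n' \
  "$system_message" "$prompt"
```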
### Run as LlamaEdge service

```bash
wasmedge --dir .:. --nn-preload default:GGML:AUTO:Bielik-1.5B-v3.0-Instruct-Q5_K_M.gguf \
  llama-api-server.wasm \
  --model-name Bielik-1.5B-v3.0-Instruct \
  --prompt-template chatml \
  --ctx-size 8192
```
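Once the service is up, it exposes an OpenAI-compatible chat endpoint. A hedged sketch of a request follows (llama-api-server listens on port 8080 by default; the message contents are illustrative):

```shell
# Query the running LlamaEdge API server's chat completions endpoint.
# Port 8080 is the default; adjust if the server was started differently.
curl -X POST http://localhost:8080/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{
    "model": "Bielik-1.5B-v3.0-Instruct",
    "messages": [
      {"role": "system", "content": "Jesteś pomocnym asystentem."},
      {"role": "user", "content": "Przedstaw się krótko."}
    ]
  }'
```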
### Run as LlamaEdge command app

```bash
wasmedge --dir .:. --nn-preload default:GGML:AUTO:Bielik-1.5B-v3.0-Instruct-Q5_K_M.gguf \
  llama-chat.wasm \
  --prompt-template chatml \
  --ctx-size 8192
```
*Quantized with llama.cpp b5201*