MamayLM-Gemma-2-9B-IT-v0.1-GGUF

MamayLM is distributed under the Gemma Terms of Use.

This repo contains the GGUF format model files for INSAIT-Institute/MamayLM-Gemma-2-9B-IT-v0.1.
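
If you do not already have a .gguf file locally, one convenient way to fetch a single quantized file is huggingface_hub. A minimal sketch; the exact filename below is a hypothetical example, so check the repository's file list for the real names:

from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="INSAIT-Institute/MamayLM-Gemma-2-9B-IT-v0.1-GGUF",
    # Hypothetical filename for illustration -- pick an actual .gguf from the repo files.
    filename="MamayLM-Gemma-2-9B-IT-v0.1.Q4_K_M.gguf",
)
print(model_path)  # local cache path, usable as model_path below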

Quick Start using Python

Install the required package:

pip install llama-cpp-python
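
This installs a CPU-only build by default. If you want GPU offloading and have a CUDA toolchain available (an assumption about your environment, not a requirement of the model), llama-cpp-python documents enabling CUDA at build time:

CMAKE_ARGS="-DGGML_CUDA=on" pip install llama-cpp-python

With such a build, passing n_gpu_layers=-1 to Llama(...) offloads all layers to the GPU.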

Example chat completion:

from llama_cpp import Llama

llm = Llama(
    model_path="path/to/your/model.gguf",
    n_ctx=8192,
    penalize_nl=False
)

messages = [{"role": "user", "content": "Хто такий Козак Мамай?"}]  # "Who is Cossack Mamay?"
response = llm.create_chat_completion(
    messages=messages,
    max_tokens=2048,        # Maximum number of tokens to generate
    temperature=0.1,
    top_p=0.9,
    repeat_penalty=1.0,
    stop=["<eos>", "<end_of_turn>"]
)
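
create_chat_completion returns an OpenAI-style chat response, so the generated answer sits in the first choice:

print(response["choices"][0]["message"]["content"])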

Example normal completion:

from llama_cpp import Llama

llm = Llama(
    model_path="path/to/your/model.gguf",
    n_ctx=8192,
    penalize_nl=False
)

prompt = "<start_of_turn>user\nХто такий Козак Мамай?<end_of_turn>\n<start_of_turn>model\n"
response = llm(
    prompt,
    max_tokens=2048,        # Maximum number of tokens to generate
    temperature=0.1,
    top_p=0.9,
    repeat_penalty=1.0,
    stop=["<eos>","<end_of_turn>"]
)
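
For plain completions the generated text is under choices[0]["text"]. The same call also supports token streaming; a minimal sketch using the stream=True flag:

print(response["choices"][0]["text"])

# Stream tokens as they are generated instead of waiting for the full answer.
for chunk in llm(prompt, max_tokens=2048, stream=True, stop=["<eos>", "<end_of_turn>"]):
    print(chunk["choices"][0]["text"], end="", flush=True)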

Model details

Format: GGUF
Model size: 9.24B parameters
Architecture: gemma2
Available quantizations: 4-bit, 5-bit, 8-bit
Base model: google/gemma-2-9b
