---
library_name: transformers
tags:
- mergekit
- merge
- conversational
- chat
- instruct
- llama-cpp
- gguf-my-repo
base_model: sequelbox/Llama3.1-8B-PlumChat
license: llama3.1
model-index:
- name: Llama3.1-8B-PlumChat
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-Shot)
      type: Winogrande
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 72.22
      name: acc
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 42.43
      name: strict accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=sequelbox/Llama3.1-8B-PlumChat
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 13.94
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=sequelbox/Llama3.1-8B-PlumChat
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 3.1
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=sequelbox/Llama3.1-8B-PlumChat
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 2.01
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=sequelbox/Llama3.1-8B-PlumChat
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 4.77
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=sequelbox/Llama3.1-8B-PlumChat
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 12.52
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=sequelbox/Llama3.1-8B-PlumChat
      name: Open LLM Leaderboard
---

# Triangle104/Llama3.1-8B-PlumChat-Q5_K_S-GGUF
This model was converted to GGUF format from [`sequelbox/Llama3.1-8B-PlumChat`](https://huggingface.co/sequelbox/Llama3.1-8B-PlumChat) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/sequelbox/Llama3.1-8B-PlumChat) for more details on the model.
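
If you prefer to download the quantized file yourself rather than letting llama.cpp fetch it on first run, a minimal sketch using the Hugging Face CLI (assumes `huggingface_hub` is installed):

```bash
# Assumes the Hugging Face CLI is available: pip install -U "huggingface_hub[cli]"
huggingface-cli download Triangle104/Llama3.1-8B-PlumChat-Q5_K_S-GGUF \
  llama3.1-8b-plumchat-q5_k_s.gguf --local-dir .
```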

## Use with llama.cpp
Install llama.cpp via Homebrew (works on macOS and Linux):

```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo Triangle104/Llama3.1-8B-PlumChat-Q5_K_S-GGUF --hf-file llama3.1-8b-plumchat-q5_k_s.gguf -p "The meaning of life and the universe is"
```
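
Since the underlying model is chat/instruct-tuned, you may get better results in conversation mode. A minimal sketch, assuming a recent llama.cpp build that supports the `-cnv` flag:

```bash
# -cnv starts an interactive chat session using the model's chat template
llama-cli --hf-repo Triangle104/Llama3.1-8B-PlumChat-Q5_K_S-GGUF --hf-file llama3.1-8b-plumchat-q5_k_s.gguf -cnv -p "You are a helpful assistant."
```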

### Server:
```bash
llama-server --hf-repo Triangle104/Llama3.1-8B-PlumChat-Q5_K_S-GGUF --hf-file llama3.1-8b-plumchat-q5_k_s.gguf -c 2048
```
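
Once the server is up, it exposes an OpenAI-compatible API. A minimal sketch of a chat request, assuming the server is running on its default port (8080):

```bash
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "user", "content": "Explain what Q5_K_S quantization is in one sentence."}
    ]
  }'
```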

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
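
For example, a CUDA-enabled build on Linux might look like this (a sketch; build flag names have changed across llama.cpp versions, so check the repo's build docs for your checkout):

```bash
# LLAMA_CUDA=1 enables the CUDA backend; -j parallelizes the build
cd llama.cpp && LLAMA_CURL=1 LLAMA_CUDA=1 make -j
```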

Step 3: Run inference through the `llama-cli` binary:
```bash
./llama-cli --hf-repo Triangle104/Llama3.1-8B-PlumChat-Q5_K_S-GGUF --hf-file llama3.1-8b-plumchat-q5_k_s.gguf -p "The meaning of life and the universe is"
```
or 
```bash
./llama-server --hf-repo Triangle104/Llama3.1-8B-PlumChat-Q5_K_S-GGUF --hf-file llama3.1-8b-plumchat-q5_k_s.gguf -c 2048
```