Description
This repository contains GGUF format model files for Meta's Llama 3 8B Instruct.
Prompt template
```
<|start_header_id|>system<|end_header_id|>

{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
This is the same template Ollama ships for the model: https://ollama.com/library/llama3:instruct/blobs/8ab4849b038c
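For reference, here is a minimal sketch of filling in this template by hand and running the quantized file with the llama-cpp-python bindings. The bindings, context size, and file path are assumptions for illustration, not something this repository provides:

```python
# Sketch: apply the prompt template above and run the GGUF file with
# llama-cpp-python (assumed to be installed separately: pip install llama-cpp-python).
from llama_cpp import Llama

TEMPLATE = (
    "<|start_header_id|>system<|end_header_id|>\n\n"
    "{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
    "{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
)

# Path and context length are examples; adjust to where you downloaded the file.
llm = Llama(model_path="./meta-llama-3-8b-instruct.Q4_K_M.gguf", n_ctx=8192)

prompt = TEMPLATE.format(
    system_prompt="You are a helpful assistant.",
    prompt="Explain what a GGUF file is in one sentence.",
)

# Stop on <|eot_id|> so generation ends at the assistant's turn boundary.
output = llm(prompt, max_tokens=256, stop=["<|eot_id|>"])
print(output["choices"][0]["text"])
```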
Downloading using huggingface-cli
First, make sure you have huggingface-cli installed:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you need:
```shell
huggingface-cli download liashchynskyi/Meta-Llama-3-8B-Instruct-GGUF --include "meta-llama-3-8b-instruct.Q4_K_M.gguf" --local-dir ./ --local-dir-use-symlinks False
```
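If you prefer to stay in Python, the same file can also be fetched with the huggingface_hub library installed above. A short sketch (the target directory is only an example):

```python
# Sketch: download the single GGUF file via the huggingface_hub Python API
# instead of the CLI.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="liashchynskyi/Meta-Llama-3-8B-Instruct-GGUF",
    filename="meta-llama-3-8b-instruct.Q4_K_M.gguf",
    local_dir="./",  # example location; omit to use the default cache
)
print(f"Model saved to {path}")
```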
Base model
meta-llama/Meta-Llama-3-8B-Instruct