TensorBlock

Feedback and support: TensorBlock's Twitter/X, Telegram Group and Discord server

allenai/Llama-3.1-Tulu-3-70B - GGUF

This repo contains GGUF format model files for allenai/Llama-3.1-Tulu-3-70B.

The files were quantized using machines provided by TensorBlock, and they are compatible with llama.cpp as of commit b4242.
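
If you need a compatible build, one option is to check out and compile the matching release tag from source. This is a minimal sketch, assuming a Unix-like shell with git and cmake available (adjust for your platform):

# fetch llama.cpp and pin it to the release tag matching this repo
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
git checkout b4242
# standard CMake build; binaries land in build/bin/
cmake -B build
cmake --build build --config Release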

Prompt template

<|system|>
{system_prompt}
<|user|>
{prompt}
<|assistant|>
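
For example, once you have downloaded a model file (see the instructions below), you can fill in this template and pass it to llama.cpp directly. A minimal sketch, assuming the llama-cli binary from the build above and the Q4_K_M file in MY_LOCAL_DIR; the -e flag expands the \n escapes into real newlines:

./build/bin/llama-cli \
  -m MY_LOCAL_DIR/Llama-3.1-Tulu-3-70B-Q4_K_M.gguf \
  -e -p "<|system|>\nYou are a helpful assistant.\n<|user|>\nSummarize what GGUF is in one sentence.\n<|assistant|>\n" \
  -n 256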

Model file specification

Filename Quant type File Size Description
Llama-3.1-Tulu-3-70B-Q2_K.gguf Q2_K 26.375 GB smallest, significant quality loss - not recommended for most purposes
Llama-3.1-Tulu-3-70B-Q3_K_S.gguf Q3_K_S 30.912 GB very small, high quality loss
Llama-3.1-Tulu-3-70B-Q3_K_M.gguf Q3_K_M 34.268 GB very small, high quality loss
Llama-3.1-Tulu-3-70B-Q3_K_L.gguf Q3_K_L 37.141 GB small, substantial quality loss
Llama-3.1-Tulu-3-70B-Q4_0.gguf Q4_0 39.970 GB legacy; small, very high quality loss - prefer using Q3_K_M
Llama-3.1-Tulu-3-70B-Q4_K_S.gguf Q4_K_S 40.347 GB small, greater quality loss
Llama-3.1-Tulu-3-70B-Q4_K_M.gguf Q4_K_M 42.520 GB medium, balanced quality - recommended
Llama-3.1-Tulu-3-70B-Q5_0.gguf Q5_0 48.658 GB legacy; medium, balanced quality - prefer using Q4_K_M
Llama-3.1-Tulu-3-70B-Q5_K_S.gguf Q5_K_S 48.658 GB large, low quality loss - recommended
Llama-3.1-Tulu-3-70B-Q5_K_M.gguf Q5_K_M 49.950 GB large, very low quality loss - recommended
Llama-3.1-Tulu-3-70B-Q6_K.gguf Q6_K 57.888 GB very large, extremely low quality loss
Llama-3.1-Tulu-3-70B-Q8_0.gguf Q8_0 74.975 GB very large, extremely low quality loss - not recommended

Downloading instructions

Command line

First, install the Hugging Face CLI:

pip install -U "huggingface_hub[cli]"

Then, download an individual model file to a local directory:

huggingface-cli download tensorblock/Llama-3.1-Tulu-3-70B-GGUF --include "Llama-3.1-Tulu-3-70B-Q2_K.gguf" --local-dir MY_LOCAL_DIR

If you want to download multiple model files matching a pattern (e.g., *Q4_K*gguf), you can run:

huggingface-cli download tensorblock/Llama-3.1-Tulu-3-70B-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
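
Once a file is downloaded, you can also serve it over HTTP with llama.cpp's llama-server, which exposes an OpenAI-compatible chat endpoint. A minimal sketch, again assuming the build directory from above and the Q4_K_M file (substitute whichever quant you downloaded):

# serve the model on localhost:8080 (OpenAI-compatible API under /v1)
./build/bin/llama-server \
  -m MY_LOCAL_DIR/Llama-3.1-Tulu-3-70B-Q4_K_M.gguf \
  --port 8080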
Model details

Format: GGUF
Model size: 70.6B params
Architecture: llama
Quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
