ngxson/Llama-4-Maverick-17B-128E-Instruct-Q2_K-GGUF
Tags: GGUF, conversational · License: llama4
Branch: main
1 contributor · History: 2 commits
Latest commit: Upload folder using huggingface_hub, by ngxson (HF Staff), 3be7b1f (verified), 7 days ago
File                                                          Size       LFS   Commit message                        Updated
.gitattributes                                                1.9 kB           Upload folder using huggingface_hub   7 days ago
Llama-4-Maverick-17B-128E-Instruct-Q2_K-00001-of-00004.gguf   47.6 GB    LFS   Upload folder using huggingface_hub   7 days ago
Llama-4-Maverick-17B-128E-Instruct-Q2_K-00002-of-00004.gguf   48.1 GB    LFS   Upload folder using huggingface_hub   7 days ago
Llama-4-Maverick-17B-128E-Instruct-Q2_K-00003-of-00004.gguf   48.1 GB    LFS   Upload folder using huggingface_hub   7 days ago
Llama-4-Maverick-17B-128E-Instruct-Q2_K-00004-of-00004.gguf   1.78 GB    LFS   Upload folder using huggingface_hub   7 days ago
README.md                                                     156 Bytes        initial commit                        7 days ago
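
The Q2_K quantization is split across four GGUF shards (roughly 146 GB in total), and the commit messages indicate the repository was populated with huggingface_hub. Below is a minimal sketch of fetching the shards with that library; the local download directory is a placeholder chosen for illustration, not something specified by this repository.

```python
# Minimal sketch: download the GGUF shards of this repo with huggingface_hub.
# Assumption: "./llama4-maverick-q2k" is an illustrative target directory.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="ngxson/Llama-4-Maverick-17B-128E-Instruct-Q2_K-GGUF",
    allow_patterns=["*.gguf"],          # skip README.md and .gitattributes
    local_dir="./llama4-maverick-q2k",  # placeholder download directory
)

# A GGUF-aware runtime such as llama.cpp is then pointed at the first shard,
# Llama-4-Maverick-17B-128E-Instruct-Q2_K-00001-of-00004.gguf; the remaining
# shards are picked up from the same directory.
print(local_path)
```

Given the shard sizes listed above, plan for around 146 GB of free disk space before starting the download.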