ikawrakow/various-2bit-sota-gguf
GGUF
License: apache-2.0
1 contributor · History: 13 commits
Latest commit by ikawrakow: 6d5bc07, "Adding Nous-Hermes 2.31 bpw quantized models", about 1 year ago
| File | Size | Last commit | Updated |
|------|------|-------------|---------|
| .gitattributes | 1.56 kB | Adding first set of models | about 1 year ago |
| README.md | 384 Bytes | Update README.md | about 1 year ago |
| llama-v2-13b-2.17bpw.gguf | 3.54 GB | Adding first set of models | about 1 year ago |
| llama-v2-13b-2.39bpw.gguf | 3.89 GB | Adding 2.31-bpw base quantized models | about 1 year ago |
| llama-v2-70b-2.12bpw.gguf | 18.3 GB | Adding more | about 1 year ago |
| llama-v2-70b-2.36bpw.gguf | 20.3 GB | Adding 2.31-bpw base quantized models | about 1 year ago |
| llama-v2-7b-2.20bpw.gguf | 1.85 GB | Adding first set of models | about 1 year ago |
| llama-v2-7b-2.42bpw.gguf | 2.03 GB | Adding 2.31-bpw base quantized models | about 1 year ago |
| mistral-7b-2.20bpw.gguf | 1.99 GB | Adding first set of models | about 1 year ago |
| mistral-7b-2.43bpw.gguf | 2.2 GB | Adding 2.31-bpw base quantized models | about 1 year ago |
| mistral-instruct-7b-2.43bpw.gguf | 2.2 GB | Adding Mistral instruct models | about 1 year ago |
| mixtral-8x7b-2.10bpw.gguf | 12.3 GB | Adding Mixtral-8x7b | about 1 year ago |
| mixtral-8x7b-2.34bpw.gguf | 13.7 GB | Adding 2.31-bpw base quantized models | about 1 year ago |
| mixtral-instruct-8x7b-2.10bpw.gguf | 12.3 GB | Adding Mixtral-instruct-8x7b | about 1 year ago |
| mixtral-instruct-8x7b-2.34bpw.gguf | 13.7 GB | Adding Mistral instruct models | about 1 year ago |
| nous-hermes-2-10.7b-2.18bpw.gguf | 2.92 GB | Adding Nous-Hermes-2-SOLAR-10.7B 2-bit quants | about 1 year ago |
| nous-hermes-2-10.7b-2.41bpw.gguf | 3.23 GB | Adding Nous-Hermes 2.31 bpw quantized models | about 1 year ago |
| nous-hermes-2-10.7b-2.70bpw.gguf | 3.62 GB | Adding Nous-Hermes-2-SOLAR-10.7B 2-bit quants | about 1 year ago |
| nous-hermes-2-34b-2.16bpw.gguf | 9.31 GB | Adding Nous-Hermes-2-Yi-34B 2-bit quants | about 1 year ago |
| nous-hermes-2-34b-2.40bpw.gguf | 10.3 GB | Adding Nous-Hermes 2.31 bpw quantized models | about 1 year ago |
| nous-hermes-2-34b-2.69bpw.gguf | 11.6 GB | Adding Nous-Hermes-2-Yi-34B 2-bit quants | about 1 year ago |
| rocket-3b-2.31bpw.gguf | 808 MB | Adding Rocket-3b 2-bit quants | about 1 year ago |
| rocket-3b-2.76bpw.gguf | 967 MB | Adding Rocket-3b 2-bit quants | about 1 year ago |
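The trailing number in each file name is the bits-per-weight (bpw) of the quantization, so the listed sizes can be sanity-checked as roughly params × bpw / 8 bytes. A minimal sketch of that arithmetic, assuming approximate base-model parameter counts (Llama-2-7B ≈ 6.74B, Mistral-7B ≈ 7.24B) that are not stated in this listing:

```python
# Sanity-check listed file sizes against the bpw encoded in each file name:
# a model quantized at b bits per weight needs roughly n_params * b / 8 bytes.

def estimated_size_gb(n_params: float, bpw: float) -> float:
    """Approximate file size in GB (10^9 bytes) for n_params weights at bpw bits each."""
    return n_params * bpw / 8 / 1e9

# Assumed parameter counts (not stated in this listing):
# Llama-2-7B ~6.74B, Mistral-7B ~7.24B.
print(round(estimated_size_gb(6.74e9, 2.20), 2))  # close to the listed 1.85 GB
print(round(estimated_size_gb(7.24e9, 2.20), 2))  # close to the listed 1.99 GB
```

The actual GGUF files run slightly larger than this estimate would suggest for some entries, since the format also stores metadata, the token embeddings, and higher-precision scale factors alongside the quantized weights.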