bartowski/Lamarck-14B-v0.7-GGUF
Tags: Text Generation · GGUF · English · mergekit · Merge · Inference Endpoints · conversational
License: apache-2.0
Files and versions
1 contributor · History: 9 commits
Latest commit 7d2e383 (verified) by bartowski, about 1 month ago: Upload Lamarck-14B-v0.7-Q4_K_M.gguf with huggingface_hub
File                            Size     LFS   Last commit message                                        Updated
.gitattributes                  2.04 kB  –     Upload Lamarck-14B-v0.7-Q4_K_M.gguf with huggingface_hub   about 1 month ago
Lamarck-14B-v0.7-Q4_K_L.gguf    9.56 GB  LFS   Upload Lamarck-14B-v0.7-Q4_K_L.gguf with huggingface_hub   about 1 month ago
Lamarck-14B-v0.7-Q4_K_M.gguf    8.99 GB  LFS   Upload Lamarck-14B-v0.7-Q4_K_M.gguf with huggingface_hub   about 1 month ago
Lamarck-14B-v0.7-Q5_K_L.gguf    11 GB    LFS   Upload Lamarck-14B-v0.7-Q5_K_L.gguf with huggingface_hub   about 1 month ago
Lamarck-14B-v0.7-Q5_K_M.gguf    10.5 GB  LFS   Upload Lamarck-14B-v0.7-Q5_K_M.gguf with huggingface_hub   about 1 month ago
Lamarck-14B-v0.7-Q5_K_S.gguf    10.3 GB  LFS   Upload Lamarck-14B-v0.7-Q5_K_S.gguf with huggingface_hub   about 1 month ago
Lamarck-14B-v0.7-Q6_K.gguf      12.1 GB  LFS   Upload Lamarck-14B-v0.7-Q6_K.gguf with huggingface_hub     about 1 month ago
Lamarck-14B-v0.7-Q6_K_L.gguf    12.5 GB  LFS   Upload Lamarck-14B-v0.7-Q6_K_L.gguf with huggingface_hub   about 1 month ago
Lamarck-14B-v0.7-Q8_0.gguf      15.7 GB  LFS   Upload Lamarck-14B-v0.7-Q8_0.gguf with huggingface_hub     about 1 month ago
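
Since the commit messages show these quants were uploaded with huggingface_hub, the same library can fetch an individual file from the repo. A minimal sketch, assuming huggingface_hub is installed and there is enough free disk space for the roughly 9 GB Q4_K_M file (any other filename from the table above works the same way):

```python
# Minimal sketch: download one GGUF quant from this repository.
# Assumes `pip install huggingface_hub` and ~9 GB of free disk space.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="bartowski/Lamarck-14B-v0.7-GGUF",
    filename="Lamarck-14B-v0.7-Q4_K_M.gguf",  # pick any file listed above
)
print(local_path)  # path to the cached .gguf file
```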