eaddario/Dolphin-Mistral-24B-Venice-Edition-GGUF
Text Generation · GGUF · Dataset: eaddario/imatrix-calibration · English · quant · experimental · conversational · arXiv:2406.17415 · License: apache-2.0
Files and versions (branch: main)
1 contributor · History: 22 commits · Latest commit by eaddario: "Update README.md" (43fa3d9, verified), 18 days ago
| Name | Size | Last commit | Updated |
|------|------|-------------|---------|
| imatrix/ | – | Generate imatrices | 20 days ago |
| logits/ | – | Generate base model logits | 21 days ago |
| scores/ | – | Add GGUF internal file structure | 19 days ago |
| .gitattributes | 1.6 kB | Update .gitattributes | 21 days ago |
| .gitignore | 6.78 kB | Add .gitignore | 21 days ago |
| Dolphin-Mistral-24B-Venice-Edition-F16.gguf | 47.2 GB (LFS) | Convert safetensor to GGUF @ F16 | 21 days ago |
| Dolphin-Mistral-24B-Venice-Edition-IQ3_M.gguf | 10 GB (LFS) | Layer-wise quantization IQ3_M | 20 days ago |
| Dolphin-Mistral-24B-Venice-Edition-IQ3_S.gguf | 9.74 GB (LFS) | Layer-wise quantization IQ3_S | 20 days ago |
| Dolphin-Mistral-24B-Venice-Edition-IQ4_NL.gguf | 12.2 GB (LFS) | Layer-wise quantization IQ4_NL | 20 days ago |
| Dolphin-Mistral-24B-Venice-Edition-Q3_K_L.gguf | 11.4 GB (LFS) | Layer-wise quantization Q3_K_L | 20 days ago |
| Dolphin-Mistral-24B-Venice-Edition-Q3_K_M.gguf | 10.5 GB (LFS) | Layer-wise quantization Q3_K_M | 20 days ago |
| Dolphin-Mistral-24B-Venice-Edition-Q3_K_S.gguf | 9.34 GB (LFS) | Layer-wise quantization Q3_K_S | 20 days ago |
| Dolphin-Mistral-24B-Venice-Edition-Q4_K_M.gguf | 13.1 GB (LFS) | Layer-wise quantization Q4_K_M | 20 days ago |
| Dolphin-Mistral-24B-Venice-Edition-Q4_K_S.gguf | 12.3 GB (LFS) | Layer-wise quantization Q4_K_S | 20 days ago |
| Dolphin-Mistral-24B-Venice-Edition-Q5_K_M.gguf | 15.2 GB (LFS) | Layer-wise quantization Q5_K_M | 20 days ago |
| Dolphin-Mistral-24B-Venice-Edition-Q5_K_S.gguf | 14.7 GB (LFS) | Layer-wise quantization Q5_K_S | 20 days ago |
| Dolphin-Mistral-24B-Venice-Edition-Q6_K.gguf | 17.7 GB (LFS) | Layer-wise quantization Q6_K | 20 days ago |
| Dolphin-Mistral-24B-Venice-Edition-Q8_0.gguf | 23.1 GB (LFS) | Layer-wise quantization Q8_0 | 20 days ago |
| README.md | 23.6 kB | Update README.md | 18 days ago |
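The imatrix/ and logits/ directories above hold the importance matrices and base-model logits that guide the quantizations in this repo. A minimal sketch of how such an importance matrix is typically produced with llama.cpp's llama-imatrix tool, driven from Python, follows; the file names and paths are illustrative assumptions, not taken from this repo:

```python
# Sketch: generate an importance matrix with llama.cpp's llama-imatrix.
# Assumes llama-imatrix is on PATH and the F16 GGUF is in the working
# directory; "calibration.txt" is a hypothetical calibration corpus
# (e.g. one of the files from eaddario/imatrix-calibration).
import subprocess

subprocess.run(
    [
        "llama-imatrix",
        "-m", "Dolphin-Mistral-24B-Venice-Edition-F16.gguf",  # full-precision source model
        "-f", "calibration.txt",  # calibration text the activations are measured on
        "-o", "imatrix.dat",      # resulting importance matrix
    ],
    check=True,
)
```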
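The commit messages label the quantized files as layer-wise quantizations (see arXiv:2406.17415), which rely on per-tensor type choices rather than a single uniform type. The sketch below shows only the plain imatrix-guided llama-quantize path, reusing the imatrix.dat from the previous step, so it is a simplified stand-in rather than the exact recipe used here:

```python
# Sketch: one imatrix-guided quantization step with llama.cpp's
# llama-quantize (input GGUF, output GGUF, target type as positional
# arguments). Producing the Q4_K_M variant is an arbitrary example.
import subprocess

subprocess.run(
    [
        "llama-quantize",
        "--imatrix", "imatrix.dat",
        "Dolphin-Mistral-24B-Venice-Edition-F16.gguf",
        "Dolphin-Mistral-24B-Venice-Edition-Q4_K_M.gguf",
        "Q4_K_M",
    ],
    check=True,
)
```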
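To use one of the quants, a file can be fetched with huggingface_hub and loaded with llama-cpp-python; the binding and the parameters below are assumptions for illustration, and any GGUF-capable runtime works just as well:

```python
# Sketch: download one quant from this repo and run a short completion.
# llama-cpp-python is one common binding, not something this repo
# prescribes; n_ctx and max_tokens are arbitrary example values.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="eaddario/Dolphin-Mistral-24B-Venice-Edition-GGUF",
    filename="Dolphin-Mistral-24B-Venice-Edition-Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=4096)
out = llm("Hello, how are you?", max_tokens=64)
print(out["choices"][0]["text"])
```

As a rule of thumb, Q4_K_M is a common balance of size and quality, while the smaller IQ3 variants trade quality for a lower memory footprint.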