Hugging Face
marcelone/Jinx-Qwen3-14B
GGUF · conversational · License: apache-2.0
Jinx-Qwen3-14B (branch: main) · 88 GB · 1 contributor · History: 17 commits
Latest commit: marcelone · Upload imatrix.gguf · b08f978 (verified) · about 2 months ago
File | Size | Last commit message
.gitattributes | 2.46 kB | Upload imatrix.gguf
Jinx-Qwen3-14B-F16.gguf | 29.5 GB | Upload folder using huggingface_hub
Jinx-Qwen3-14B-Q4_K_M.gguf | 9 GB | Rename Jinx-Qwen3-14B-Q4_K.gguf to Jinx-Qwen3-14B-Q4_K_M.gguf
Jinx-Qwen3-14B-Q4_K_M_L.gguf | 9 GB | Rename Jinx-Qwen3-14B-Q4_1_L.gguf to Jinx-Qwen3-14B-Q4_K_M_L.gguf
Jinx-Qwen3-14B-Q4_K_M_XL.gguf | 9.2 GB | Upload Jinx-Qwen3-14B-Q4_K_M_XL.gguf
Jinx-Qwen3-14B-Q4_K_M_XXL.gguf | 9.58 GB | Rename Jinx-Qwen3-14B-Q4_2_XL.gguf to Jinx-Qwen3-14B-Q4_K_M_XXL.gguf
Jinx-Qwen3-14B-Q4_K_M_XXXL.gguf | 11 GB | Rename Jinx-Qwen3-14B-Q4_3_XL.gguf to Jinx-Qwen3-14B-Q4_K_M_XXXL.gguf
Jinx-Qwen3-14B-Q5_K_M_L.gguf | 10.6 GB | Upload Jinx-Qwen3-14B-Q5_K_M_L.gguf
README.md | 94 Bytes | Update README.md
imatrix.gguf | 7.74 MB | Upload imatrix.gguf

All files were last updated about 2 months ago.
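As a minimal sketch of how one of the quantized GGUF files above could be fetched: the repo id (marcelone/Jinx-Qwen3-14B) and the filenames come from this listing, and the `resolve/main` URL pattern is the standard Hugging Face Hub direct-download scheme. The choice of the Q4_K_M quant here is just an example.

```python
# Build the direct download URL for one of the GGUF quants in the listing,
# using the standard Hugging Face "resolve" URL pattern:
#   https://huggingface.co/<repo_id>/resolve/<revision>/<filename>
repo_id = "marcelone/Jinx-Qwen3-14B"
filename = "Jinx-Qwen3-14B-Q4_K_M.gguf"  # the ~9 GB Q4_K_M quant from the table

url = f"https://huggingface.co/{repo_id}/resolve/main/{filename}"
print(url)
```

With the `huggingface_hub` package installed, `hf_hub_download(repo_id=repo_id, filename=filename)` would fetch the same file with local caching instead of a raw URL download.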