GGUF Quantised Models for Qwen2.5-0.5B-Instruct_Johnny_Silverhand_Merged
This repository contains quantised GGUF format model files for lewiswatson/Qwen2.5-0.5B-Instruct_Johnny_Silverhand_Merged.
Original Model
The original fine-tuned model used to generate these quantisations can be found here: lewiswatson/Qwen2.5-0.5B-Instruct_Johnny_Silverhand_Merged
Provided Files (GGUF)
File | Size |
---|---|
Qwen2.5-0.5B-Instruct_Johnny_Silverhand_Merged.IQ4_XS.gguf | 335.16 MB |
Qwen2.5-0.5B-Instruct_Johnny_Silverhand_Merged.Q2_K.gguf | 322.92 MB |
Qwen2.5-0.5B-Instruct_Johnny_Silverhand_Merged.Q3_K_L.gguf | 352.25 MB |
Qwen2.5-0.5B-Instruct_Johnny_Silverhand_Merged.Q3_K_M.gguf | 339.00 MB |
Qwen2.5-0.5B-Instruct_Johnny_Silverhand_Merged.Q3_K_S.gguf | 322.59 MB |
Qwen2.5-0.5B-Instruct_Johnny_Silverhand_Merged.Q4_K_M.gguf | 379.38 MB |
Qwen2.5-0.5B-Instruct_Johnny_Silverhand_Merged.Q4_K_S.gguf | 367.61 MB |
Qwen2.5-0.5B-Instruct_Johnny_Silverhand_Merged.Q5_K_M.gguf | 400.62 MB |
Qwen2.5-0.5B-Instruct_Johnny_Silverhand_Merged.Q5_K_S.gguf | 393.59 MB |
Qwen2.5-0.5B-Instruct_Johnny_Silverhand_Merged.Q6_K.gguf | 482.31 MB |
Qwen2.5-0.5B-Instruct_Johnny_Silverhand_Merged.Q8_0.gguf | 506.47 MB |
Qwen2.5-0.5B-Instruct_Johnny_Silverhand_Merged.f16.gguf | 948.10 MB |
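Example usage
A minimal sketch of loading one of the quants above with llama-cpp-python, assuming the files are hosted under the repository id lewiswatson/Qwen2.5-0.5B-Instruct_Johnny_Silverhand_Merged-GGUF and that the GGUF embeds the Qwen2.5 chat template; it requires `pip install llama-cpp-python huggingface_hub`.

```python
from llama_cpp import Llama

# Download the Q4_K_M quant from the Hub and load it.
# Repo id and filename are taken from this model card; adjust the filename
# to pick a different quantisation from the table above.
llm = Llama.from_pretrained(
    repo_id="lewiswatson/Qwen2.5-0.5B-Instruct_Johnny_Silverhand_Merged-GGUF",
    filename="Qwen2.5-0.5B-Instruct_Johnny_Silverhand_Merged.Q4_K_M.gguf",
    n_ctx=2048,  # context window; raise or lower to fit your hardware
)

# Run a single chat turn using the chat template embedded in the GGUF.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Who are you?"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```

Smaller quants (Q2_K, Q3_K_S) trade output quality for size and speed; Q4_K_M or Q5_K_M are common middle-ground choices, and f16 is the unquantised reference.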
This repository was automatically created using a script on 2025-04-14.
Model tree for lewiswatson/Qwen2.5-0.5B-Instruct_Johnny_Silverhand_Merged-GGUF
- Base model: Qwen/Qwen2.5-0.5B
- Finetuned: Qwen/Qwen2.5-0.5B-Instruct