Antigma/DeepSeek-R1-Distill-Qwen-14B-GGUF
This model was converted to GGUF format from deepseek-ai/DeepSeek-R1-Distill-Qwen-14B
using llama.cpp via ggml.ai's GGUF-my-repo space.
Refer to the original model card for more details on the model.
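Because the weights are distributed as GGUF, the model can be loaded locally with llama.cpp or its bindings. Below is a minimal sketch using llama-cpp-python; the quantized filename pattern is an assumption (this repo advertises a 5-bit quant), so check the repository's file list for the exact .gguf name.

```python
# Minimal sketch, assuming llama-cpp-python and huggingface_hub are installed.
# The filename glob below is hypothetical; match it to the actual .gguf file in the repo.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Antigma/DeepSeek-R1-Distill-Qwen-14B-GGUF",
    filename="*Q5*.gguf",  # assumed 5-bit quant file name pattern
    n_ctx=4096,            # context window; tune for your hardware
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain GGUF in one sentence."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```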
Downloads last month: 22
Hardware compatibility
- 5-bit quantization
Inference Providers
This model isn't deployed by any Inference Provider.
HF Inference deployability: the model has no pipeline_tag.
Model tree for Antigma/DeepSeek-R1-Distill-Qwen-14B-GGUF
Base model: deepseek-ai/DeepSeek-R1-Distill-Qwen-14B