
QuantFactory/DeepSeek-R1-Distill-Qwen-14B-abliterated-v2-GGUF

This is a quantized version of huihui-ai/DeepSeek-R1-Distill-Qwen-14B-abliterated-v2, created using llama.cpp.

Original Model Card

huihui-ai/DeepSeek-R1-Distill-Qwen-14B-abliterated-v2

This is an uncensored version of deepseek-ai/DeepSeek-R1-Distill-Qwen-14B created with abliteration (see remove-refusals-with-transformers to learn more about it).
It is a crude, proof-of-concept implementation for removing refusals from an LLM without using TransformerLens.

Important Note: This version is an improvement over the previous release, huihui-ai/DeepSeek-R1-Distill-Qwen-14B-abliterated, and resolves issues present in that model.

Use with ollama

You can use huihui_ai/deepseek-r1-abliterated directly:

ollama run huihui_ai/deepseek-r1-abliterated:14b
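Alternatively, the GGUF files in this repository can be run directly with llama.cpp. A minimal sketch, assuming llama.cpp and huggingface-cli are installed; the quant filename below is an assumption for illustration, so check the repository's file list for the exact names:

```shell
# Download one quant file from this repo
# (filename is assumed; verify it in the repository's file list)
huggingface-cli download QuantFactory/DeepSeek-R1-Distill-Qwen-14B-abliterated-v2-GGUF \
  DeepSeek-R1-Distill-Qwen-14B-abliterated-v2.Q4_K_M.gguf --local-dir .

# Start an interactive chat session with llama.cpp's CLI,
# offloading layers to the GPU where available
llama-cli -m DeepSeek-R1-Distill-Qwen-14B-abliterated-v2.Q4_K_M.gguf \
  -cnv -ngl 99 --temp 0.6
```

A lower-bit quant (4-bit) trades some quality for a smaller memory footprint; the 8-bit file is closer to the original weights but roughly twice the size.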
Downloads last month: 1,801
Format: GGUF
Model size: 14.8B params
Architecture: qwen2
Available quantizations: 4-bit, 8-bit

