Abliterated version of Qwen/Qwen2.5-72B-Instruct, built using code from refusal_direction, quantized here to EXL2 at 4.5 bits per weight.

For more information about the abliteration technique, refer to this article and check out @FailSpy.
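Since this repository holds an EXL2 4.5 bpw quant, the usual way to run it is with exllamav2. The sketch below is not part of the original card: the class names and the autosplit/generate calls follow the exllamav2 example code and may differ between library versions.

```python
# Minimal sketch (assumption, not from this card): download the EXL2 quant and
# run it with exllamav2. API names follow the exllamav2 examples and may vary
# across versions.
from huggingface_hub import snapshot_download
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator

# Fetch the quantized weights locally (repo id taken from this card).
model_dir = snapshot_download("gghfez/zetasepic_Qwen2.5-72B-Instruct-abliterated-exl2-4.5bpw")

config = ExLlamaV2Config(model_dir)
model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)  # lazy cache so autosplit can size it per GPU
model.load_autosplit(cache)               # spread the 72B weights across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)
print(generator.generate(prompt="Give a one-sentence summary of abliteration.",
                         max_new_tokens=64))
```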

GGUF

Model tree for gghfez/zetasepic_Qwen2.5-72B-Instruct-abliterated-exl2-4.5bpw:

Base model: Qwen/Qwen2.5-72B (this model is a quantized derivative)