CorticalStack/neurotic-crown-clown-7b-ties-awq


CorticalStack/neurotic-crown-clown-7b-ties-awq is an AWQ quantised version of CorticalStack/neurotic-crown-clown-7b-ties.

About AWQ

AWQ is an efficient, accurate and blazing-fast low-bit weight quantisation method, currently supporting 4-bit quantisation. It offers faster Transformers-based inference than GPTQ, with quality equivalent to or better than the most commonly used GPTQ settings.

AWQ models are currently supported on Linux and Windows, with NVIDIA GPUs only. macOS users: please use GGUF models instead.
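As a minimal sketch, the quantised model can be loaded through the standard transformers API (AWQ checkpoints load via `AutoModelForCausalLM` when the autoawq package is installed). This is untested here and assumes an NVIDIA GPU; the prompt and generation parameters are illustrative only.

```python
def generate(prompt, model_id="CorticalStack/neurotic-crown-clown-7b-ties-awq"):
    """Load the AWQ checkpoint and generate a completion (needs an NVIDIA GPU
    and `pip install autoawq transformers`). Imports are deferred so the
    function can be defined without a GPU present."""
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.float16,  # AWQ kernels run activations in fp16
        device_map="auto",          # place layers on the available GPU(s)
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=128)
    return tokenizer.decode(output[0], skip_special_tokens=True)
```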


AWQ configuration

  • Zero point: True
  • Q group size: 128
  • W bit: 4
  • Version: GEMM
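The settings above map directly onto an AutoAWQ `quant_config` dictionary. A hedged sketch of how such an export could be reproduced follows; the output directory name is illustrative, and the run itself needs an NVIDIA GPU and substantial memory, so this is untested here.

```python
# Quantisation settings as listed in this card's "AWQ configuration" section.
quant_config = {
    "zero_point": True,   # Zero point: True
    "q_group_size": 128,  # Q group size: 128
    "w_bit": 4,           # W bit: 4
    "version": "GEMM",    # Version: GEMM
}


def quantise(source="CorticalStack/neurotic-crown-clown-7b-ties",
             out_dir="neurotic-crown-clown-7b-ties-awq"):
    """Quantise the source model with AutoAWQ (`pip install autoawq`).
    Imports are deferred so the config above stays importable without a GPU."""
    from awq import AutoAWQForCausalLM
    from transformers import AutoTokenizer

    model = AutoAWQForCausalLM.from_pretrained(source)
    tokenizer = AutoTokenizer.from_pretrained(source)

    model.quantize(tokenizer, quant_config=quant_config)  # heavy: needs a GPU
    model.save_quantized(out_dir)
    tokenizer.save_pretrained(out_dir)
```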
Model details

  • Format: Safetensors
  • Model size: 1.2B params
  • Tensor types: I32, F16