Disclaimer:

This model is reproduced based on the paper "VPTQ: Extreme Low-bit Vector Post-Training Quantization for Large Language Models" (see the project's GitHub repository and arXiv preprint).

The model itself is sourced from a community release.

It is intended only for experimental purposes.

Users are responsible for any consequences arising from the use of this model.
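For experimentation, a VPTQ checkpoint like this one is typically loaded through the vptq package from the project's GitHub repository. The following is a minimal sketch, assuming vptq and transformers are installed and that vptq.AutoModelForCausalLM mirrors the transformers loading API as shown in the repository's examples; the exact API may differ between releases.

```python
# Sketch only: assumes `pip install vptq transformers` and sufficient GPU memory.
import vptq
from transformers import AutoTokenizer

model_id = "VPTQ-community/Meta-Llama-3.3-70B-Instruct-v16-k65536-16384-woft"

# vptq.AutoModelForCausalLM follows the transformers from_pretrained interface
# (per the VPTQ repository's examples); device_map="auto" spreads the
# quantized weights across available devices.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = vptq.AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Explain vector post-training quantization in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```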

Safetensors checkpoint · Model size: 6.87B params · Tensor types: BF16, I32, I16