Qwen 2.5 72B Instruct Dynamic FP8

This is an FP8 dynamic quantization (A8W8) of https://huggingface.co/Qwen/Qwen2.5-72B-Instruct, intended for use with vLLM==0.8.5.post1 and above.
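
A minimal offline-inference sketch with vLLM's Python API. The quantization scheme is read from the checkpoint config, so no extra flags are needed; `tensor_parallel_size=4` and the prompt are illustrative assumptions, since 72.7B parameters at 8 bits is roughly 73 GB of weights before KV cache:

```python
from vllm import LLM, SamplingParams

# Requires vllm>=0.8.5.post1 and GPUs with FP8 support.
# tensor_parallel_size=4 is an assumption -- adjust to however many
# GPUs you need to fit ~73 GB of FP8 weights plus KV cache.
llm = LLM(
    model="mesolitica/Qwen2.5-72B-Instruct-FP8",
    tensor_parallel_size=4,
)

sampling = SamplingParams(temperature=0.7, max_tokens=256)
outputs = llm.generate(["Briefly explain FP8 dynamic quantization."], sampling)
print(outputs[0].outputs[0].text)
```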

Model size: 72.7B params (Safetensors)
Tensor types: BF16 · F8_E4M3

Model tree for mesolitica/Qwen2.5-72B-Instruct-FP8

Base model: Qwen/Qwen2.5-72B (this model is a quantized derivative)
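
The card does not state how the checkpoint was produced. For reference, a common recipe for an FP8-dynamic (A8W8) checkpoint like this uses the llm-compressor library; the sketch below is an assumption about the process, not a statement of the author's method, and `SAVE_DIR` plus the `ignore` list are illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier

MODEL_ID = "Qwen/Qwen2.5-72B-Instruct"
SAVE_DIR = "Qwen2.5-72B-Instruct-FP8"  # assumed output path

model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype="auto", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# FP8_DYNAMIC: static per-channel FP8 weights, dynamic per-token FP8
# activations (A8W8); this scheme needs no calibration data.
recipe = QuantizationModifier(
    targets="Linear", scheme="FP8_DYNAMIC", ignore=["lm_head"]
)
oneshot(model=model, recipe=recipe)

model.save_pretrained(SAVE_DIR, save_compressed=True)
tokenizer.save_pretrained(SAVE_DIR)
```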