Qwen3-Next-80B-A3B-Instruct-NVFP4

Quantized version of Qwen/Qwen3-Next-80B-A3B-Instruct using LLM Compressor and the NVFP4 (E2M1 + E4M3) format.

This time it actually works! We think.

This should be the start of a series of hopefully optimal NVFP4 quantizations as NVFP4-capable cards become more common in the wild.
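For reference, a post-training quantization flow of this kind looks roughly like the sketch below. This is a minimal, illustrative sketch assuming a recent llmcompressor release with the NVFP4 preset; the calibration dataset, sample count, and ignore list are placeholders, not necessarily the exact settings used for this checkpoint.

```python
# Minimal NVFP4 PTQ sketch with LLM Compressor (illustrative settings only).
from transformers import AutoModelForCausalLM, AutoTokenizer
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier

MODEL_ID = "Qwen/Qwen3-Next-80B-A3B-Instruct"
OUTPUT_DIR = "Qwen3-Next-80B-A3B-Instruct-NVFP4"

model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# NVFP4 preset: FP4 (E2M1) weights and activations with FP8 (E4M3) scales
# over 16-element blocks; the lm_head stays in higher precision.
recipe = QuantizationModifier(
    targets="Linear",
    scheme="NVFP4",
    ignore=["lm_head"],
)

# Calibration data is needed to fit the activation scales.
oneshot(
    model=model,
    dataset="open_platypus",        # illustrative calibration set
    recipe=recipe,
    max_seq_length=2048,
    num_calibration_samples=512,
)

model.save_pretrained(OUTPUT_DIR, save_compressed=True)
tokenizer.save_pretrained(OUTPUT_DIR)
```

For a MoE model like Qwen3-Next, router/gating modules are typically left unquantized as well; the quantization config actually applied to this checkpoint is recorded in the repo's config.json.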


Model Summary

| Property | Value |
|---|---|
| Base model | Qwen/Qwen3-Next-80B-A3B-Instruct |
| Quantization | NVFP4 (FP4 microscaling, block size = 16, scale = E4M3) |
| Method | Post-training quantization with LLM Compressor |
| Toolchain | LLM Compressor |
| Hardware target | NVIDIA Blackwell / GB200 Tensor Cores (untested on RTX cards) |
| Precision | Weights & activations = FP4 • Scales = FP8 (E4M3) |
| Maintainer | RESMP.DEV |
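As a concrete reading of the "block size = 16, scale = E4M3" row, the toy NumPy sketch below shows the basic microscaling idea: each group of 16 values shares one scale, and each value is rounded to a signed 4-bit E2M1 code. It is purely illustrative and skips the FP8 encoding of the scale and the per-tensor global scale used by real NVFP4 kernels.

```python
import numpy as np

# Magnitudes representable by FP4 E2M1 (a sign bit is stored separately).
E2M1_LEVELS = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

def quantize_block(block: np.ndarray):
    """Quantize one 16-element block to E2M1 codes plus a shared scale."""
    assert block.size == 16
    amax = float(np.abs(block).max())
    # Pick the scale so the largest magnitude maps to the top E2M1 level (6.0).
    scale = amax / 6.0 if amax > 0 else 1.0
    # Real NVFP4 stores this scale in FP8 E4M3; we keep it in float here.
    signs = np.sign(block)
    mags = np.abs(block) / scale
    # Round each magnitude to the nearest representable E2M1 level.
    codes = np.argmin(np.abs(mags[:, None] - E2M1_LEVELS[None, :]), axis=1)
    return signs, codes, scale

def dequantize_block(signs, codes, scale):
    return signs * E2M1_LEVELS[codes] * scale

block = np.random.randn(16).astype(np.float32)
approx = dequantize_block(*quantize_block(block))
print("max abs error:", np.abs(block - approx).max())
```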

Description

This model is a drop-in replacement for Qwen/Qwen3-Next-80B-A3B-Instruct that runs in NVFP4 precision. Accuracy remains within ≈1% of the FP8 baseline on standard reasoning and coding benchmarks.
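A minimal offline-inference sketch with vLLM is shown below; it assumes a recent vLLM build with compressed-tensors/NVFP4 support and Blackwell-class hardware, and the parallelism and context-length settings are illustrative.

```python
# Illustrative offline inference with vLLM; adjust parallelism and context
# length for your hardware.
from vllm import LLM, SamplingParams

llm = LLM(
    model="RESMP-DEV/Qwen3-Next-80B-A3B-Instruct-NVFP4",
    tensor_parallel_size=1,   # increase for multi-GPU setups
    max_model_len=8192,
)

params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)
outputs = llm.generate(["Summarize the NVFP4 format in two sentences."], params)
print(outputs[0].outputs[0].text)
```

The same checkpoint should also work with `vllm serve RESMP-DEV/Qwen3-Next-80B-A3B-Instruct-NVFP4` for an OpenAI-compatible endpoint.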
