Update README.md
README.md

@@ -26,7 +26,7 @@ This should be the start of a new series of *hopefully optimal* NVFP4 quantizati
 | Quantization | NVFP4 (FP4 microscaling, block = 16, scale = E4M3) |
 | Method | Post-Training Quantization with LLM Compressor |
 | Toolchain | LLM Compressor |
-| Hardware target | NVIDIA Blackwell(Untested on RTX cards) / GB200 Tensor Cores |
+| Hardware target | NVIDIA Blackwell (Untested on RTX cards) / GB200 Tensor Cores |
 | Precision | Weights & activations = FP4 • Scales = FP8 (E4M3) |
 | Maintainer | **REMSP.DEV** |
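For context, the table above describes an NVFP4 post-training quantization produced with LLM Compressor. Below is a minimal sketch of what such a run typically looks like; the model ID, calibration dataset, and exact `oneshot` arguments are illustrative assumptions, not the recipe actually used for this checkpoint.

```python
# Hedged sketch of an NVFP4 PTQ run with LLM Compressor.
# MODEL_ID, the calibration dataset, and the sample counts below are
# placeholders; consult the LLM Compressor docs for the exact recipe.
from transformers import AutoModelForCausalLM, AutoTokenizer

from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier

MODEL_ID = "meta-llama/Llama-3.1-8B-Instruct"  # placeholder model

model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# Quantize all Linear layers to NVFP4 (FP4 weights/activations with block
# size 16 and FP8 E4M3 scales); keep lm_head in higher precision.
recipe = QuantizationModifier(targets="Linear", scheme="NVFP4", ignore=["lm_head"])

# Activation scales need calibration data, so oneshot is given a small
# calibration set.
oneshot(
    model=model,
    recipe=recipe,
    dataset="open_platypus",
    max_seq_length=2048,
    num_calibration_samples=512,
)

model.save_pretrained("model-NVFP4", save_compressed=True)
tokenizer.save_pretrained("model-NVFP4")
```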