Update README.md
README.md
@@ -1,3 +1,3 @@
 **gpt-oss-20B-BF16-Unquantized**
 
-Unquantized GGUF BF16 model weights for gpt-oss-20B (the MoE layers are all unquantized from MXFP4). Happy quantizing!
+Unquantized GGUF BF16 model weights for gpt-oss-20B (the MoE layers are all unquantized from MXFP4 to BF16). Happy quantizing!
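
Since the README invites quantizing these BF16 weights, here is a minimal sketch of one way to do that with llama.cpp's `llama-quantize` tool. The binary path, the input/output file names, and the `Q4_K_M` target type are assumptions for illustration, not part of this repository.

```python
# Minimal sketch: re-quantize the unquantized BF16 GGUF with llama.cpp's
# llama-quantize tool (usage: llama-quantize <input.gguf> <output.gguf> <type>).
# Assumptions: llama.cpp is built locally; file names below are hypothetical.
import subprocess

bf16_gguf = "gpt-oss-20b-bf16.gguf"      # hypothetical BF16 input from this repo
quant_gguf = "gpt-oss-20b-q4_k_m.gguf"   # hypothetical quantized output
quant_type = "Q4_K_M"                    # example target quantization type

subprocess.run(
    ["./llama.cpp/build/bin/llama-quantize", bf16_gguf, quant_gguf, quant_type],
    check=True,
)
```

Any quantization type supported by your llama.cpp build (e.g. Q8_0, Q5_K_M) can be substituted for the example type above.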