GGUF quants of nvidia/AceMath-72B-Instruct
Using llama.cpp b4682 (commit 0893e0114e934bdd0eba0ff69d9ef8c59343cbc3)
The importance matrix was generated with InferenceIllusionist's groups_merged-enhancedV3.txt (later renamed calibration_datav3.txt), an edited version of kalomaze's original groups_merged.txt.
All quants, including the K quants, were generated and calibrated with this importance matrix.
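For reference, the following is a minimal sketch of how imatrix-calibrated quants of this kind can be produced with the llama.cpp b4682 tools. It is not necessarily the exact invocation used here; the file names are placeholders, and an F16 GGUF conversion of the model is assumed to exist already.

```python
# Sketch of the imatrix + quantization pipeline, assuming the llama.cpp b4682
# binaries (llama-imatrix, llama-quantize) are built and on PATH.
import subprocess

f16_model = "AceMath-72B-Instruct-F16.gguf"    # assumed name of the F16 GGUF conversion
calibration = "calibration_datav3.txt"         # groups_merged-enhancedV3.txt under its later name
imatrix_out = "AceMath-72B-Instruct.imatrix"   # assumed output name for the importance matrix

# 1. Generate the importance matrix from the calibration text.
subprocess.run(
    ["llama-imatrix", "-m", f16_model, "-f", calibration, "-o", imatrix_out],
    check=True,
)

# 2. Produce an imatrix-calibrated quant; Q4_K_M is shown as one example type.
subprocess.run(
    ["llama-quantize", "--imatrix", imatrix_out,
     f16_model, "AceMath-72B-Instruct-Q4_K_M.gguf", "Q4_K_M"],
    check=True,
)
```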