This is a quantized checkpoint (w8a16) produced with llm-compressor, supporting inference with vLLM and SGLang.
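Since the checkpoint targets vLLM, loading it follows the standard vLLM offline-inference pattern. A minimal sketch (the sampling parameters are illustrative, and running it requires a GPU setup large enough for a 70B w8a16 model):

```python
from vllm import LLM, SamplingParams

# Load the quantized checkpoint directly from the Hub;
# vLLM picks up the compressed-tensors quantization config automatically.
llm = LLM(model="jiangchengchengNLP/L3.3-MS-Nevoria-70b-w8a16")

params = SamplingParams(temperature=0.7, max_tokens=128)
outputs = llm.generate(["Write a short greeting."], params)
print(outputs[0].outputs[0].text)
```

For serving instead of offline inference, the equivalent is `vllm serve jiangchengchengNLP/L3.3-MS-Nevoria-70b-w8a16`, which exposes an OpenAI-compatible API.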
Model tree for jiangchengchengNLP/L3.3-MS-Nevoria-70b-w8a16
- Base model: Steelskull/L3.3-MS-Nevoria-70b