vllm (pretrained=nm-testing/DeepSeek-Coder-V2-Lite-Instruct-quantized.w8a8,max_model_len=2048,trust_remote_code=True), gen_kwargs: (None), limit: None, num_fewshot: 5, batch_size: auto
|Tasks|Version|     Filter     |n-shot|  Metric   |   |Value |   |Stderr|
|-----|------:|----------------|-----:|-----------|---|-----:|---|-----:|
|gsm8k|      3|flexible-extract|     5|exact_match|↑  |0.7513|±  |0.0119|
|     |       |strict-match    |     5|exact_match|↑  |0.7301|±  |0.0122|
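The ± column is the binomial standard error of each accuracy estimate. As a sanity check, it can be recomputed from the reported exact-match scores, assuming the full GSM8K test split of 1,319 questions was evaluated (`limit: None` above):

```python
import math

def binomial_stderr(p: float, n: int) -> float:
    """Standard error of a proportion p estimated from n samples."""
    return math.sqrt(p * (1 - p) / n)

N_GSM8K = 1319  # size of the GSM8K test split (assumed, since limit is None)

# Reported exact-match scores from the table above
print(round(binomial_stderr(0.7513, N_GSM8K), 4))  # flexible-extract → 0.0119
print(round(binomial_stderr(0.7301, N_GSM8K), 4))  # strict-match     → 0.0122
```

Both values match the Stderr column, consistent with evaluation on the full test split.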
Safetensors model size: 15.7B params; tensor types: BF16, I8.
