DO NOT USE. vLLM does not currently support FP8 dynamic quantization on FP8 Fused MoE layers. This model is kept for later use.
Run cmd:

```sh
NB_GPU=8
docker run --rm --runtime nvidia --gpus 'all' --ipc=host \
  -e "OMP_NUM_THREADS=$(nproc)" -e 'HF_TOKEN' \
  -v '/root/.cache/huggingface:/root/.cache/huggingface' \
  -p 8000:8000 'vllm/vllm-openai:v0.7.3' \
  --host 0.0.0.0 --port 8000 --trust-remote-code \
  --tensor-parallel-size $NB_GPU \
  --served-model-name deepseek-reasoner \
  --enable-reasoning --reasoning-parser deepseek_r1 \
  --model 'ig1/r1-1776-FP8-Dynamic' \
  --override-generation-config '{"temperature":0.6,"top_p":0.95}' \
  --enable-chunked-prefill=False
```
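If vLLM later gains support for this quantization layout, the container exposes an OpenAI-compatible API on port 8000. A minimal smoke test could look like the following (a sketch; it assumes the server came up successfully, which it currently does not, per the error below):

```sh
# Query the OpenAI-compatible endpoint; the "model" field must match
# --served-model-name above. Assumes the server started successfully.
curl http://localhost:8000/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{
        "model": "deepseek-reasoner",
        "messages": [{"role": "user", "content": "Say hello."}],
        "max_tokens": 64
      }'
```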
Error:

```text
For FP8 Fused MoE layers, only per-tensor scales for weights and activations are supported.
Found num_bits=8 type='float' symmetric=True group_size=None strategy='channel' block_structure=None dynamic=False actorder=None observer='minmax' observer_kwargs={},
      num_bits=8 type='float' symmetric=True group_size=None strategy='token' block_structure=None dynamic=True actorder=None observer=None observer_kwargs={}
```

In short: this checkpoint stores channel-wise FP8 weight scales and dynamic per-token activation scales, but vLLM's fused MoE kernels accept only per-tensor scales for both.
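The two strategies the error reports come from the checkpoint's quantization config. One way to confirm them (a sketch; the jq path assumes a compressed-tensors style config.json, which this repo appears to use):

```sh
# Fetch the model's config.json from the Hub and print the quantization
# scheme; expect strategy 'channel' for weights and 'token' (dynamic)
# for input activations. Assumes jq is installed.
curl -s 'https://huggingface.co/ig1/r1-1776-FP8-Dynamic/resolve/main/config.json' \
  | jq '.quantization_config.config_groups'
```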