Fix model config
#4 opened 3 months ago by BitPhinix
The given huggingface model architecture DeepseekV3ForCausalLM is not supported in TRT-LLM yet
#3 opened 3 months ago by wpfnnnns
How to run model on 8xH200
#2 opened 4 months ago by U2hhd24
Remove quantization_config from config.json
#1 opened 4 months ago by alphatozeta