FP8 dynamic quantization performed with llm-compressor: weights are quantized statically to FP8, while activation scales are computed dynamically per token at inference time.
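A minimal sketch of how such a checkpoint can be produced with llm-compressor's `QuantizationModifier` and the `FP8_DYNAMIC` scheme. The exact recipe used for this model is not stated on the card; the `ignore` patterns (keeping `lm_head` and the vision tower in higher precision) follow llm-compressor's published vision-language examples and are an assumption here.

```python
# Sketch: FP8-Dynamic quantization of Qwen2.5-VL-72B-Instruct with llm-compressor.
# Assumption: the recipe below mirrors llm-compressor's VL examples; the actual
# recipe used for this checkpoint is not documented on the card.
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier

MODEL_ID = "Qwen/Qwen2.5-VL-72B-Instruct"
SAVE_DIR = "Qwen2.5-VL-72B-Instruct-FP8-Dynamic"

model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    MODEL_ID, torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained(MODEL_ID)

# FP8_DYNAMIC requires no calibration data: weight scales are computed
# directly from the weights, and activation scales are derived per token
# at inference time.
recipe = QuantizationModifier(
    targets="Linear",
    scheme="FP8_DYNAMIC",
    ignore=["re:.*lm_head", "re:visual.*"],  # keep head and vision tower unquantized
)

oneshot(model=model, recipe=recipe)

model.save_pretrained(SAVE_DIR)
processor.save_pretrained(SAVE_DIR)
```

Because no calibration set is needed, the run is limited mainly by the memory required to load the 72B model; the saved directory can then be served directly by vLLM.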