Ministral-8B-Instruct-2410 quantized with mixed precision: the embedding and output (head) layers are quantized to 6-bit precision, while the rest of the model uses 4-bit quantization. This mixed-precision approach keeps most of the size and speed savings of 4-bit quantization while preserving higher precision in the layers most sensitive to quantization error.
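The card does not state how the checkpoint was produced. As a minimal sketch, assuming the repo is in MLX format, this layer-wise bit allocation matches what `mlx_lm.convert` supports through its `quant_predicate` callback; the layer names (`embed_tokens`, `lm_head`) and the group size below are assumptions, not taken from the card.

```python
# Hypothetical sketch of producing a mixed 4-/6-bit checkpoint with mlx_lm
# (assumption: MLX toolchain; the model card does not name the tool used).
from mlx_lm import convert

def mixed_4_6_bit(path: str, module, config) -> dict:
    # Assumed layer names: keep the token embedding and output head at
    # 6-bit precision, quantize every other layer to 4-bit.
    if "embed_tokens" in path or "lm_head" in path:
        return {"bits": 6, "group_size": 64}
    return {"bits": 4, "group_size": 64}

convert(
    "mistralai/Ministral-8B-Instruct-2410",
    mlx_path="Ministral-8B-Instruct-mixed-4-6-bit",
    quantize=True,
    quant_predicate=mixed_4_6_bit,
)
```

Under the same MLX assumption, loading the quantized model and generating would look like this:

```python
from mlx_lm import load, generate

model, tokenizer = load("dgomes03/Ministral-8B-Instruct-mixed-4-6-bit")
messages = [{"role": "user", "content": "Explain mixed-precision quantization."}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)
print(generate(model, tokenizer, prompt=prompt, max_tokens=128))
```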
Model tree for dgomes03/Ministral-8B-Instruct-mixed-4-6-bit
Base model: mistralai/Ministral-8B-Instruct-2410