Mistral-7B-Instruct-v0.3 quantized with mixed precision:
This is a Mistral-7B-Instruct model whose embedding layer and output (head) layer are quantized to 6-bit precision, while the remaining layers use 4-bit quantization. This mixed-precision approach aims to balance model size and inference speed against accuracy by keeping the layers most sensitive to quantization error at higher precision.
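As a rough illustration of the idea (not the actual quantization code used to produce this model), the sketch below applies uniform symmetric quantization with a per-layer bit-width policy: 6 bits for the embedding and head, 4 bits elsewhere. Layer names, shapes, and the quantization scheme are all simplified assumptions.

```python
import numpy as np

def quantize(weights, bits):
    """Uniform symmetric quantization of a weight tensor to `bits` bits."""
    levels = 2 ** (bits - 1) - 1          # 7 for 4-bit, 31 for 6-bit
    scale = np.max(np.abs(weights)) / levels
    q = np.clip(np.round(weights / scale), -levels, levels).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

def bits_for_layer(name):
    """Mixed-precision policy: 6-bit for embedding/head, 4-bit elsewhere."""
    if name in ("embed_tokens", "lm_head"):
        return 6
    return 4

# Toy "model": layer name -> weight tensor (hypothetical names and shapes).
rng = np.random.default_rng(0)
model = {
    "embed_tokens": rng.standard_normal((8, 16)).astype(np.float32),
    "layers.0.mlp": rng.standard_normal((16, 16)).astype(np.float32),
    "lm_head": rng.standard_normal((16, 8)).astype(np.float32),
}

for name, w in model.items():
    bits = bits_for_layer(name)
    q, scale = quantize(w, bits)
    err = np.abs(dequantize(q, scale) - w).mean()
    print(f"{name}: {bits}-bit, mean abs error {err:.4f}")
```

The extra 2 bits quadruple the number of quantization levels, so the embedding and head round-trip with noticeably lower error than the 4-bit layers, at a modest cost in total model size.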