dgomes03 committed
Commit e97dcde · verified · 1 Parent(s): d732de4

Create README.md


Mistral-7B-Instruct-v0.3 quantized with mixed precision:
This is a Mistral-7B-Instruct-v0.3 model in which the embedding layer and the output (head) layer are quantized to 6-bit precision, while the rest of the model uses 4-bit quantization. This mixed-precision approach balances model size and inference speed against improved accuracy in the layers most sensitive to quantization error.
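
The commit does not state which tool produced the quantization, so the sketch below is only a minimal, library-free Python illustration of the per-layer bit assignment described above; the layer-name patterns ("embed_tokens", "lm_head") are assumptions based on common Hugging Face naming for Mistral-style models.

```python
# Minimal sketch of the mixed-precision rule described above.
# Assumption: layer names follow typical Hugging Face Mistral naming
# ("model.embed_tokens", "lm_head", ...); this is not taken from the commit.

def bits_for_layer(layer_name: str) -> int:
    """Return the quantization bit width used for a given layer."""
    # Embedding and output (head) weights are kept at 6-bit precision,
    # since errors there have an outsized effect on output quality.
    if "embed_tokens" in layer_name or "lm_head" in layer_name:
        return 6
    # Every other weight matrix is quantized to 4-bit.
    return 4


if __name__ == "__main__":
    for name in (
        "model.embed_tokens",
        "model.layers.0.self_attn.q_proj",
        "model.layers.0.mlp.down_proj",
        "lm_head",
    ):
        print(f"{name}: {bits_for_layer(name)}-bit")
```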

Files changed (1)
  1. README.md +5 -0
README.md ADDED
@@ -0,0 +1,5 @@
+ ---
+ license: apache-2.0
+ base_model:
+ - mistralai/Mistral-7B-Instruct-v0.3
+ ---