The original model is `OriginalModel.mlpackage` in float32.
Quantization and input max length:
- Core ML: linear weight quantization, nbits = 8
- input max length = 128
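For reference, linear 8-bit weight quantization of an `.mlpackage` can be done with `coremltools.optimize.coreml`. A minimal sketch, assuming coremltools 7+; the output file name `QuantizedModel.mlpackage` is illustrative, not from this repo:

```python
import coremltools as ct
import coremltools.optimize.coreml as cto

# Load the original float32 package.
model = ct.models.MLModel("OriginalModel.mlpackage")

# Linear 8-bit quantization applied to all weights.
op_config = cto.OpLinearQuantizerConfig(mode="linear_symmetric")
config = cto.OptimizeConfig(global_config=op_config)
quantized = cto.linear_quantize_weights(model, config)

quantized.save("QuantizedModel.mlpackage")
```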
Note: I tried converting the model to float16, but that changed its predictions too much. With linear 8-bit quantization, it behaves almost the same as the original.
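One way to check how far a quantized package drifts from the original is to compare embeddings on the same input. A hedged sketch; the input names `input_ids`/`attention_mask` and the output name `embeddings` are assumptions and should be matched to the actual model spec:

```python
import numpy as np
import coremltools as ct

orig = ct.models.MLModel("OriginalModel.mlpackage")
quant = ct.models.MLModel("QuantizedModel.mlpackage")  # illustrative name

# Dummy tokenized input padded to the converted max length of 128.
inputs = {
    "input_ids": np.zeros((1, 128), dtype=np.int32),
    "attention_mask": np.ones((1, 128), dtype=np.int32),
}

a = orig.predict(inputs)["embeddings"]   # output name is an assumption
b = quant.predict(inputs)["embeddings"]

# Cosine similarity near 1.0 means quantization barely moved the output.
cos = np.dot(a.ravel(), b.ravel()) / (np.linalg.norm(a) * np.linalg.norm(b))
print(f"cosine similarity: {cos:.6f}")
```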
Base model: intfloat/multilingual-e5-small