indic-mALBERT-static-INT8-squad-v2

This model is a static INT8 quantized version of indic-mALBERT-squad-v2, fine-tuned on the squad_v2 dataset. Please note that we use Intel® Neural Compressor for the INT8 quantization.
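The sketch below shows one way to load and run the quantized model for extractive question answering. It is a minimal example, assuming the repository id (`owner/indic-mALBERT-static-INT8-squad-v2` is a placeholder) and that the quantized weights are in a format loadable through the Intel Neural Compressor integration in `optimum-intel`.

```python
# Minimal usage sketch. Assumptions: the repo id below is a placeholder,
# and the INT8 weights can be loaded via optimum-intel's INC integration.
from transformers import AutoTokenizer, pipeline
from optimum.intel import INCModelForQuestionAnswering

model_id = "owner/indic-mALBERT-static-INT8-squad-v2"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = INCModelForQuestionAnswering.from_pretrained(model_id)

# Standard transformers question-answering pipeline on top of the INT8 model.
qa = pipeline("question-answering", model=model, tokenizer=tokenizer)

result = qa(
    question="Which toolkit was used for quantization?",
    context="The model weights were quantized to INT8 using Intel Neural Compressor.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```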
