Sparse BERT mini model (uncased)

A fine-tuned, pruned version of the BERT mini (uncased) model with 1x4 block structured sparsity (90% of weights pruned).

Intended Use

The model can be used for inference with sparsity optimizations. Further details on the model and its usage will be available soon.
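
Below is a minimal inference sketch, assuming the model loads through the standard Transformers sequence-classification interface with SST-2 sentiment labels; it is not an official usage example.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Model id as shown on this page; assumed to work with the Auto* classes.
model_id = "Intel/bert-mini-sst2-distilled-sparse-90-1X4-block"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id).eval()

# Classify a single sentence (SST-2 style sentiment).
inputs = tokenizer("a charming and often affecting journey.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_class = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_class])
```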

Evaluation Results

We obtain the following result on the SST-2 task's development set:

Task: SST-2 (Acc)
Sparse BERT mini: 87.2%
Dense BERT mini (baseline): 84.74%

The sparse model outperforms the dense BERT mini baseline.
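
The accuracy above could be checked with a simple evaluation loop; the sketch below assumes the GLUE SST-2 validation split from the datasets library and is not the evaluation script used to produce the reported number.

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "Intel/bert-mini-sst2-distilled-sparse-90-1X4-block"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id).eval()

# SST-2 development (validation) set from GLUE.
dataset = load_dataset("glue", "sst2", split="validation")

correct = 0
for example in dataset:
    inputs = tokenizer(example["sentence"], return_tensors="pt", truncation=True)
    with torch.no_grad():
        pred = model(**inputs).logits.argmax(dim=-1).item()
    correct += int(pred == example["label"])

print(f"Accuracy: {correct / len(dataset):.4f}")
```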