80% 1x4 Block Sparse BERT-Base (uncased) Prune OFA

This model was created using the Prune OFA method described in Prune Once for All: Sparse Pre-Trained Language Models, presented at the ENLSP NeurIPS Workshop 2021.

For further details on the model and its results, see our paper and our implementation, available here.
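To make the "80% 1x4 block sparse" label concrete: the pruned weight matrices have roughly 80% of their contiguous 1-row-by-4-column blocks set entirely to zero. The sketch below (an illustration, not part of the Prune OFA implementation) shows how one could measure the 1x4 block sparsity of a weight matrix with NumPy; the helper name `block_sparsity` and the toy matrix are assumptions for this example.

```python
import numpy as np

def block_sparsity(weight: np.ndarray, block=(1, 4)) -> float:
    """Return the fraction of (1, 4) blocks that are entirely zero.

    Assumes the matrix dimensions divide evenly by the block shape,
    which holds for BERT-Base weight matrices (hidden size 768).
    """
    rows, cols = weight.shape
    br, bc = block
    # Group entries into non-overlapping blocks of shape (br, bc).
    blocks = weight.reshape(rows // br, br, cols // bc, bc)
    # A block counts as pruned only if every entry in it is zero.
    zero_blocks = np.all(blocks == 0, axis=(1, 3))
    return float(zero_blocks.mean())

# Toy example: a 4x8 matrix holds eight 1x4 blocks; keep two nonzero.
w = np.zeros((4, 8))
w[0, 0:4] = 1.0  # one dense 1x4 block
w[2, 4:8] = 1.0  # another dense 1x4 block
print(block_sparsity(w))  # 6 of 8 blocks are zero -> 0.75
```

Applying the same check to the linear layers of the released checkpoint should report a block-sparsity ratio near 0.8.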
