A Chinese MRC model built on Chinese PERT-large

Please use BertForQuestionAnswering to load this model!
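
Below is a minimal usage sketch with the Hugging Face `transformers` library, following the loading instruction above. The question/context pair is a made-up illustration, not from the training data.

```python
import torch
from transformers import BertForQuestionAnswering, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("hfl/chinese-pert-large-mrc")
model = BertForQuestionAnswering.from_pretrained("hfl/chinese-pert-large-mrc")

question = "中国的首都是哪里？"  # "What is the capital of China?"
context = "中国的首都是北京。"    # "The capital of China is Beijing."

inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Take the most likely start/end token positions and decode the answer span.
start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax())
answer = tokenizer.decode(inputs["input_ids"][0][start:end + 1])
print(answer)  # expected: 北京
```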

This is a Chinese machine reading comprehension (MRC) model built on PERT-large and fine-tuned on a mixture of Chinese MRC datasets.

PERT is a pre-trained model based on the permuted language model (PerLM) objective, which learns text semantics in a self-supervised manner without introducing [MASK] tokens. It yields competitive results in tasks such as reading comprehension and sequence labeling. The sketch below illustrates the core idea.
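A toy sketch of the no-[MASK] permutation idea: shuffle a subset of token positions and train the model to recover the original order. This is only an illustration; the actual PerLM recipe (permutation granularity, ratio, and prediction head) follows the PERT paper.

```python
import random

def make_perlm_example(tokens, permute_ratio=0.15, seed=0):
    """Shuffle a subset of token positions; for each shuffled position,
    record where its original token came from. The pre-training task is
    to recover the original order. No [MASK] tokens are inserted."""
    rng = random.Random(seed)
    k = max(2, int(len(tokens) * permute_ratio))
    positions = sorted(rng.sample(range(len(tokens)), k))
    shuffled = positions[:]
    rng.shuffle(shuffled)
    permuted = list(tokens)
    targets = {}
    for src, dst in zip(positions, shuffled):
        permuted[dst] = tokens[src]  # token from position src now sits at dst
        targets[dst] = src           # the model must predict src given dst
    return permuted, targets

tokens = "the quick brown fox jumps over the lazy dog".split()
permuted, targets = make_perlm_example(tokens, seed=1)
print(permuted, targets)
```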

Results on Chinese MRC datasets (EM/F1):

(We report the checkpoint that has the best AVG score)

| Model | CMRC 2018 Dev | DRCD Dev | SQuAD-Zen Dev (Answerable) | AVG |
| :---- | :-----------: | :------: | :------------------------: | :-: |
| PERT-large | 73.5/90.8 | 91.2/95.7 | 63.0/79.3 | 75.9/88.6 |

Please visit our GitHub repo for more information: https://github.com/ymcui/PERT

You may also be interested in:

- Chinese Minority Languages CINO: https://github.com/ymcui/Chinese-Minority-PLM
- Chinese MacBERT: https://github.com/ymcui/MacBERT
- Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm
- Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA
- Chinese XLNet: https://github.com/ymcui/Chinese-XLNet
- Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer

More resources by HFL: https://github.com/ymcui/HFL-Anthology
