bert-base-uncased fine-tuned on MNLI

Model Details and Training Data

We used the pretrained bert-base-uncased model and fine-tuned it on the MultiNLI (MNLI) dataset.
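For reference, here is a minimal inference sketch using the transformers library, assuming the checkpoint is published under the repository name ishan/bert-base-uncased-mnli shown on the model page. The label order in `labels` is an assumption; check `model.config.id2label` before relying on it.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "ishan/bert-base-uncased-mnli"  # repository name from this model page
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."

# Encode the premise/hypothesis pair with the same 128-token limit used in training.
inputs = tokenizer(premise, hypothesis, truncation=True, max_length=128,
                   return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# NOTE: this label order is an assumption; verify it against model.config.id2label.
labels = ["contradiction", "entailment", "neutral"]
print(labels[logits.argmax(dim=-1).item()])
```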

The training hyperparameters were kept the same as in Devlin et al., 2019: learning rate = 2e-5, 3 training epochs, max_sequence_len = 128, and batch_size = 32.
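The original training script is not included here; the sketch below reconstructs an equivalent setup with the transformers Trainer and the multi_nli dataset from the datasets library, plugging in the hyperparameters listed above.

```python
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=3)  # MNLI: entailment / neutral / contradiction

dataset = load_dataset("multi_nli")

def encode(batch):
    # Premise/hypothesis pairs, truncated and padded to the 128-token limit above.
    return tokenizer(batch["premise"], batch["hypothesis"],
                     truncation=True, padding="max_length", max_length=128)

encoded = dataset.map(encode, batched=True)

args = TrainingArguments(
    output_dir="bert-base-uncased-mnli",
    learning_rate=2e-5,               # hyperparameters as stated above
    num_train_epochs=3,
    per_device_train_batch_size=32,
)

trainer = Trainer(model=model, args=args, train_dataset=encoded["train"])
trainer.train()
```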

Evaluation Results

The evaluation results are shown in the table below.

| Test Corpus | Accuracy |
|-------------|----------|
| Matched     | 0.8456   |
| Mismatched  | 0.8484   |
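Since the MNLI test labels are not publicly released, scores like these are typically computed on the matched/mismatched validation (dev) splits. The sketch below scores the model on those splits; it assumes the model's label ids line up with the multi_nli dataset's (0 = entailment, 1 = neutral, 2 = contradiction), which should be verified against `model.config.id2label`.

```python
import torch
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "ishan/bert-base-uncased-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name).eval()

def accuracy(split):
    data = load_dataset("multi_nli", split=split)
    correct = 0
    for i in range(0, len(data), 32):
        batch = data[i : i + 32]  # slicing a Dataset yields a dict of lists
        inputs = tokenizer(batch["premise"], batch["hypothesis"],
                           truncation=True, padding=True, max_length=128,
                           return_tensors="pt")
        with torch.no_grad():
            preds = model(**inputs).logits.argmax(dim=-1)
        # ASSUMPTION: model label ids match the dataset's label ids.
        correct += (preds == torch.tensor(batch["label"])).sum().item()
    return correct / len(data)

# The dev splits stand in for the unreleased MNLI test labels.
print("matched:", accuracy("validation_matched"))
print("mismatched:", accuracy("validation_mismatched"))
```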