Multilingual Natural Language Inference (XNLI) – XLM-R Tokenizer
This model is a fine-tuned version of xlm-roberta-base on the xnli dataset (all_languages configuration). It achieves the following results on the evaluation set (a usage sketch follows the list):
- Loss: 0.8065
- F1: 0.6668
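To try the checkpoint, a minimal inference sketch is below. It assumes the repository id shown in this card; the French premise/hypothesis pair is an illustrative placeholder, not an example from the card.

```python
# Minimal inference sketch for the published checkpoint.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "Keyurjotaniya007/xlm-roberta-base-xnli-multilingual"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Hypothetical premise/hypothesis pair (any XNLI language works).
premise = "Le chat dort sur le canapé."
hypothesis = "Un animal se repose."
inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

# XNLI class order: 0 = entailment, 1 = neutral, 2 = contradiction.
print(logits.argmax(dim=-1).item())
```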
Training hyperparameters
The following hyperparameters were used during training (a training sketch follows the list):
- learning_rate: 2e-5
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- weight_decay: 0.01
- warmup_ratio: 0.1
- num_epochs: 1
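A condensed training sketch mapping these values onto transformers' TrainingArguments is below. The card does not show the preprocessing code, so the flattening of the all_languages configuration and the per-epoch evaluation are assumptions.

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
raw = load_dataset("xnli", "all_languages")

def flatten(batch):
    # In the all_languages configuration, each premise is a {lang: text}
    # dict and each hypothesis pairs a list of language codes with a list
    # of translations; expand each example into one pair per language.
    premises, hypotheses, labels = [], [], []
    for prem, hyp, label in zip(batch["premise"], batch["hypothesis"],
                                batch["label"]):
        for lang, text in prem.items():
            i = hyp["language"].index(lang)
            premises.append(text)
            hypotheses.append(hyp["translation"][i])
            labels.append(label)
    return {"premise": premises, "hypothesis": hypotheses, "label": labels}

def tokenize(batch):
    return tokenizer(batch["premise"], batch["hypothesis"],
                     truncation=True, max_length=128)

dataset = raw.map(flatten, batched=True,
                  remove_columns=raw["train"].column_names)
dataset = dataset.map(tokenize, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-base", num_labels=3)

args = TrainingArguments(
    output_dir="xlm-roberta-base-xnli-multilingual",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    weight_decay=0.01,
    warmup_ratio=0.1,
    num_train_epochs=1,
    evaluation_strategy="epoch",  # assumption: evaluate once per epoch
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
    tokenizer=tokenizer,
)
trainer.train()
```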
Training results
| Training Loss | Epoch | Validation Loss | F1 |
|:-------------:|:-----:|:---------------:|:------:|
| 0.6133 | 1 | 0.8065 | 0.6668 |
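The F1 above could come from a metric callback like the sketch below; macro averaging over the three classes is an assumption, since the card does not state how the score is aggregated.

```python
# Hypothetical compute_metrics callback for the Trainer sketch above.
import numpy as np
from sklearn.metrics import f1_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {"f1": f1_score(labels, preds, average="macro")}
```

Passing compute_metrics=compute_metrics to the Trainer adds the F1 column alongside the validation loss.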
Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2