NER-Advance
This model is a fine-tuned version of xlm-roberta-base on the unimelb-nlp/wikiann dataset. It achieves the following results on the evaluation set:
• Loss: 0.2720
• F1: 0.9196
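The checkpoint can be loaded for inference with the standard `transformers` token-classification pipeline. A minimal sketch (assumes `transformers` is installed and the checkpoint is available under the repository ID from this model card):

```python
def run_ner(text):
    # Imported inside the function so the sketch stays self-contained;
    # the model weights are downloaded from the Hub on first use.
    from transformers import pipeline

    ner = pipeline(
        "token-classification",
        model="Keyurjotaniya007/xlm-roberta-base-wikiann-ner",
        aggregation_strategy="simple",  # merge sub-word pieces into whole entities
    )
    return ner(text)

# Example call (requires network access to download the model):
# run_ner("Barack Obama was born in Hawaii.")
```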
Training hyperparameters
The following hyperparameters were used during training:
• learning_rate: 2e-5
• train_batch_size: 24
• eval_batch_size: 24
• seed: 42
• weight_decay: 0.01
• lr_scheduler_type: linear
• warmup_ratio: 0.1
• num_epochs: 2
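These settings map onto `transformers.TrainingArguments` roughly as follows (a sketch of the configuration, not the exact training script; `output_dir` is a placeholder, and argument names follow the Transformers 4.38 API):

```python
from transformers import TrainingArguments

# Hyperparameters from the list above; output_dir is a hypothetical placeholder.
training_args = TrainingArguments(
    output_dir="xlm-roberta-base-wikiann-ner",
    learning_rate=2e-5,
    per_device_train_batch_size=24,
    per_device_eval_batch_size=24,
    seed=42,
    weight_decay=0.01,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=2,
)
```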
Training results
Training Loss | Epoch | Validation Loss | F1
0.2895 | 1.0 | 0.3054 | 0.8916
0.2422 | 2.0 | 0.2720 | 0.9195
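The reported F1 for NER is typically computed at the entity level over BIO-tagged spans (as the `seqeval` library does), not per token. A minimal pure-Python sketch of span-level F1, for illustration only:

```python
def extract_spans(tags):
    """Collect (start, end, type) entity spans from a BIO tag sequence."""
    spans, start, etype = [], None, None
    for i, tag in enumerate(tags + ["O"]):  # trailing sentinel flushes the last span
        if start is not None and (
            tag == "O" or tag.startswith("B-") or tag[2:] != etype
        ):
            spans.append((start, i, etype))
            start, etype = None, None
        if tag.startswith("B-"):
            start, etype = i, tag[2:]
    return spans

def span_f1(gold, pred):
    """Micro-averaged entity-level F1 between gold and predicted BIO sequences."""
    g, p = set(extract_spans(gold)), set(extract_spans(pred))
    tp = len(g & p)
    precision = tp / len(p) if p else 0.0
    recall = tp / len(g) if g else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0
```

For example, predicting only the PER span in `["B-PER", "I-PER", "O", "B-LOC"]` gives precision 1.0, recall 0.5, and F1 ≈ 0.667.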
Framework versions
• Transformers 4.38.2
• Pytorch 2.2.1+cu121
• Datasets 2.18.0
• Tokenizers 0.15.2
Model tree for Keyurjotaniya007/xlm-roberta-base-wikiann-ner
• Base model: FacebookAI/xlm-roberta-base