
test-ner

This model is a fine-tuned version of bert-base-multilingual-cased for named-entity recognition on an unspecified dataset (a usage sketch follows the metrics below). It achieves the following results on the evaluation set:

  • Loss: 0.4431
  • Overall Precision: 0.7848
  • Overall Recall: 0.7371
  • Overall F1: 0.7602
  • Overall Accuracy: 0.8909
  • Cw F1: 0.0435
  • Date F1: 0.8512
  • Eve F1: 0.3552
  • Gpe F1: 0.2694
  • Loc F1: 0.8575
  • Misc F1: 0.0
  • Obj F1: 0.5506
  • Org F1: 0.6249
  • Per F1: 0.9249
  • Time F1: 0.2662
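
A minimal usage sketch, not part of the original card: it loads the checkpoint with the standard transformers token-classification pipeline. The repo id farihashifa/test-ner is taken from the hub page hosting this card, and the example sentence is an illustrative assumption.

```python
# Hedged usage sketch; assumes the checkpoint is published as "farihashifa/test-ner".
from transformers import AutoModelForTokenClassification, AutoTokenizer, pipeline

model_id = "farihashifa/test-ner"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

# aggregation_strategy="simple" merges word-piece predictions into whole-entity spans.
ner = pipeline("token-classification", model=model, tokenizer=tokenizer,
               aggregation_strategy="simple")
print(ner("Angela Merkel visited Paris in 2015."))
```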

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a matching configuration sketch follows the list):

  • learning_rate: 2e-05
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 5
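
A hedged sketch of how these hyperparameters map onto a transformers Trainer setup. The dataset, label list, and tokenization are not included in this card, so they appear only as labeled placeholders; the number of labels is an assumption based on the ten entity types reported above.

```python
# Hedged training-configuration sketch (not the card's actual training script).
from transformers import (AutoModelForTokenClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "bert-base-multilingual-cased"
num_labels = 21  # assumption: 10 entity types in IOB2 tagging plus "O"; the real label set is not given
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name, num_labels=num_labels)

args = TrainingArguments(
    output_dir="test-ner",
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    num_train_epochs=5,
    seed=42,
    lr_scheduler_type="linear",  # linear decay; Adam betas/epsilon are the defaults listed above
    eval_strategy="epoch",       # the results table reports one evaluation per epoch
)

train_dataset = None  # placeholder: tokenized, label-aligned NER training split
eval_dataset = None   # placeholder: evaluation split

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    tokenizer=tokenizer,
)
# trainer.train()
```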

Training results

| Training Loss | Epoch | Step | Validation Loss | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy | Cw F1 | Date F1 | Eve F1 | Gpe F1 | Loc F1 | Misc F1 | Obj F1 | Org F1 | Per F1 | Time F1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| No log | 1.0 | 53 | 0.9845 | 0.5582 | 0.5316 | 0.5445 | 0.7825 | 0.0 | 0.5253 | 0.0 | 0.0 | 0.6964 | 0.0 | 0.0105 | 0.0254 | 0.6707 | 0.0 |
| No log | 2.0 | 106 | 0.6825 | 0.6836 | 0.6160 | 0.6481 | 0.8338 | 0.0 | 0.7518 | 0.0 | 0.0090 | 0.7787 | 0.0 | 0.0665 | 0.3462 | 0.8034 | 0.0302 |
| No log | 3.0 | 159 | 0.5386 | 0.7556 | 0.6740 | 0.7124 | 0.8678 | 0.0442 | 0.8097 | 0.1012 | 0.1431 | 0.8312 | 0.0 | 0.3589 | 0.4756 | 0.8770 | 0.2222 |
| No log | 4.0 | 212 | 0.4683 | 0.7716 | 0.7283 | 0.7493 | 0.8859 | 0.0333 | 0.8403 | 0.3259 | 0.2372 | 0.8473 | 0.0 | 0.5455 | 0.6094 | 0.9123 | 0.1927 |
| No log | 5.0 | 265 | 0.4431 | 0.7848 | 0.7371 | 0.7602 | 0.8909 | 0.0435 | 0.8512 | 0.3552 | 0.2694 | 0.8575 | 0.0 | 0.5506 | 0.6249 | 0.9249 | 0.2662 |
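
The per-entity F1 columns (Cw, Date, Eve, Gpe, Loc, Misc, Obj, Org, Per, Time) are entity-level scores of the kind produced by seqeval. The card does not include the evaluation code, so the following is only an illustrative sketch of how such metrics are computed from IOB2-tagged sequences.

```python
# Illustrative sketch: entity-level metrics with seqeval (not the card's actual evaluation code).
from seqeval.metrics import classification_report, f1_score, precision_score, recall_score

# Toy gold and predicted label sequences in IOB2 format.
y_true = [["B-PER", "I-PER", "O", "B-LOC"]]
y_pred = [["B-PER", "I-PER", "O", "B-ORG"]]

print("precision:", precision_score(y_true, y_pred))
print("recall:   ", recall_score(y_true, y_pred))
print("f1:       ", f1_score(y_true, y_pred))
print(classification_report(y_true, y_pred))  # per-entity precision/recall/F1, e.g. PER, LOC, ORG
```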

Framework versions

  • Transformers 4.44.0
  • Pytorch 2.4.0
  • Datasets 2.21.0
  • Tokenizers 0.19.1