---
library_name: transformers
base_model: dmis-lab/biobert-base-cased-v1.1
tags:
- generated_from_trainer
model-index:
- name: biobert-ner-model
  results: []
---
# biobert-ner-model
This model is a fine-tuned version of [dmis-lab/biobert-base-cased-v1.1](https://huggingface.co/dmis-lab/biobert-base-cased-v1.1) for named-entity recognition; the training dataset is not recorded in this card. It achieves the following results on the evaluation set:
- Loss: 0.0386
- Compositemention: precision 0.8000, recall 0.9143, F1 0.8533 (support: 35)
- Diseaseclass: precision 0.5342, recall 0.6825, F1 0.5993 (support: 126)
- Modifier: precision 0.7015, recall 0.8785, F1 0.7801 (support: 214)
- Specificdisease: precision 0.8255, recall 0.8495, F1 0.8373 (support: 412)
- Overall Precision: 0.7346
- Overall Recall: 0.8335
- Overall F1: 0.7810
- Overall Accuracy: 0.9934
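The overall numbers are micro-averages across the four entity types. As a sanity check, a short sketch recomputing the overall recall from the per-type recalls and supports above, and the overall F1 as the harmonic mean of the reported overall precision and recall (all values copied from this card):

```python
# Per-entity (recall, support) pairs, copied from the evaluation results above.
entities = {
    "Compositemention": (0.9142857142857143, 35),
    "Diseaseclass":     (0.6825396825396826, 126),
    "Modifier":         (0.8785046728971962, 214),
    "Specificdisease":  (0.8495145631067961, 412),
}

# Micro-averaged recall: total true positives over total gold entities.
tp = sum(recall * n for recall, n in entities.values())
total = sum(n for _, n in entities.values())
micro_recall = tp / total

# Overall F1 is the harmonic mean of overall precision and recall.
precision, recall = 0.7346, 0.8335
f1 = 2 * precision * recall / (precision + recall)

print(round(micro_recall, 4))  # 0.8335, matching the reported Overall Recall
print(round(f1, 3))            # 0.781, matching the reported Overall F1
```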
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
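The entity types in the evaluation results suggest token-level BIO tagging over four mention classes. A minimal sketch of decoding such tags into entity spans, assuming hypothetical label strings like `B-Modifier` / `I-Modifier` (the model's actual label vocabulary is not listed in this card):

```python
def decode_bio(tokens, tags):
    """Collect contiguous B-X / I-X runs into (entity_type, text) spans."""
    spans, current, current_type = [], [], None
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current:
                spans.append((current_type, " ".join(current)))
            current, current_type = [token], tag[2:]
        elif tag.startswith("I-") and current_type == tag[2:]:
            current.append(token)
        else:  # "O", or an I- tag that does not continue the open span (dropped)
            if current:
                spans.append((current_type, " ".join(current)))
            current, current_type = [], None
    if current:
        spans.append((current_type, " ".join(current)))
    return spans

# Illustrative tokens and tags, not actual model output.
tokens = ["Mutations", "raise", "breast", "cancer", "risk"]
tags   = ["O", "O", "B-Modifier", "I-Modifier", "O"]
print(decode_bio(tokens, tags))  # [('Modifier', 'breast cancer')]
```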
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 20
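With `lr_scheduler_type: linear` and no warmup steps listed, the learning rate presumably decays linearly from 3e-05 to 0 over the full run. A small sketch of that schedule, assuming 359 optimizer steps per epoch (taken from the training-results table, where epoch 1.0 ends at step 359):

```python
BASE_LR = 3e-05
STEPS_PER_EPOCH = 359               # step 359 at epoch 1.0 in the results table
TOTAL_STEPS = STEPS_PER_EPOCH * 20  # num_epochs: 20

def linear_lr(step, base_lr=BASE_LR, total_steps=TOTAL_STEPS):
    """Linear decay from base_lr to 0, assuming no warmup phase."""
    return base_lr * max(0.0, 1.0 - step / total_steps)

print(linear_lr(0))                 # 3e-05 at the start of training
print(linear_lr(TOTAL_STEPS // 2))  # half the base rate midway
print(linear_lr(TOTAL_STEPS))       # 0.0 at the final step
```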
### Training results
| Training Loss | Epoch | Step | Validation Loss | Compositemention P/R/F1 (n=35) | Diseaseclass P/R/F1 (n=126) | Modifier P/R/F1 (n=214) | Specificdisease P/R/F1 (n=412) | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 0.005 | 1.0 | 359 | 0.0271 | 0.7250 / 0.8286 / 0.7733 | 0.5263 / 0.7143 / 0.6061 | 0.7198 / 0.8645 / 0.7856 | 0.8103 / 0.7985 / 0.8044 | 0.7243 | 0.8043 | 0.7622 | 0.9939 |
| 0.0049 | 2.0 | 718 | 0.0277 | 0.7750 / 0.8857 / 0.8267 | 0.5891 / 0.6032 / 0.5961 | 0.7195 / 0.8271 / 0.7696 | 0.7707 / 0.8568 / 0.8115 | 0.7297 | 0.8094 | 0.7675 | 0.9934 |
| 0.0031 | 3.0 | 1077 | 0.0330 | 0.7143 / 0.8571 / 0.7792 | 0.4940 / 0.6508 / 0.5616 | 0.7368 / 0.8505 / 0.7896 | 0.8077 / 0.8665 / 0.8361 | 0.7258 | 0.8272 | 0.7732 | 0.9935 |
| 0.0008 | 4.0 | 1436 | 0.0324 | 0.7568 / 0.8000 / 0.7778 | 0.6014 / 0.6587 / 0.6288 | 0.7469 / 0.8551 / 0.7974 | 0.8389 / 0.8471 / 0.8430 | 0.7691 | 0.8170 | 0.7924 | 0.9940 |
| 0.0019 | 5.0 | 1795 | 0.0314 | 0.7805 / 0.9143 / 0.8421 | 0.6357 / 0.6508 / 0.6431 | 0.7615 / 0.8505 / 0.8035 | 0.8148 / 0.8544 / 0.8341 | 0.7705 | 0.8234 | 0.7961 | 0.9939 |
| 0.0017 | 6.0 | 2154 | 0.0386 | 0.8000 / 0.9143 / 0.8533 | 0.5342 / 0.6825 / 0.5993 | 0.7015 / 0.8785 / 0.7801 | 0.8255 / 0.8495 / 0.8373 | 0.7346 | 0.8335 | 0.7810 | 0.9934 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1