---
base_model: dmis-lab/biobert-base-cased-v1.1
tags:
- generated_from_trainer
model-index:
- name: ner-cdr-finetuned
  results: []
---

# ner-cdr-finetuned

This model is a fine-tuned version of [dmis-lab/biobert-base-cased-v1.1](https://huggingface.co/dmis-lab/biobert-base-cased-v1.1) on an unspecified chemical/disease NER dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1871
- Chemical: precision 0.9216, recall 0.9264, F1 0.9240 (774 mentions)
- Disease: precision 0.7866, recall 0.8327, F1 0.8090 (562 mentions)
- Overall Precision: 0.8631
- Overall Recall: 0.8870
- Overall F1: 0.8749
- Overall Accuracy: 0.9480

## Model description

More information needed

## Intended uses & limitations

More information needed. An illustrative inference sketch appears at the end of this card.

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

See the reproduction sketch at the end of this card for how these map onto `TrainingArguments`.

### Training results

| Training Loss | Epoch | Step | Validation Loss | Chemical P | Chemical R | Chemical F1 | Disease P | Disease R | Disease F1 | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:----------:|:----------:|:-----------:|:---------:|:---------:|:----------:|:-----------------:|:--------------:|:----------:|:----------------:|
| No log        | 1.0   | 57   | 0.2923          | 0.8411     | 0.8889     | 0.8643      | 0.5264    | 0.7438    | 0.6165     | 0.6861            | 0.8278         | 0.7503     | 0.8980           |
| No log        | 2.0   | 114  | 0.2079          | 0.9134     | 0.8863     | 0.8997      | 0.7550    | 0.7402    | 0.7475     | 0.8464            | 0.8249         | 0.8355     | 0.9339           |
| No log        | 3.0   | 171  | 0.1910          | 0.9089     | 0.9147     | 0.9118      | 0.7578    | 0.8630    | 0.8070     | 0.8407            | 0.8930         | 0.8661     | 0.9464           |
| No log        | 4.0   | 228  | 0.1856          | 0.9297     | 0.9225     | 0.9261      | 0.7708    | 0.8256    | 0.7973     | 0.8599            | 0.8817         | 0.8707     | 0.9461           |
| No log        | 5.0   | 285  | 0.1871          | 0.9216     | 0.9264     | 0.9240      | 0.7866    | 0.8327    | 0.8090     | 0.8631            | 0.8870         | 0.8749     | 0.9480           |

Per-entity figures are computed over 774 Chemical and 562 Disease mentions in the validation split. "No log" means no training loss was recorded: the run ends at step 285, before the Trainer's first logging step (every 500 steps by default). The per-entity values follow the seqeval metric's output format; a sketch of that computation also appears at the end of this card.

### Framework versions

- Transformers 4.41.1
- Pytorch 2.7.0+cu118
- Datasets 3.6.0
- Tokenizers 0.19.1
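
To make the hyperparameter list concrete, here is a minimal reproduction sketch with the `transformers` Trainer. It is an illustration under assumptions, not the author's actual script: the BIO label set is inferred from the Chemical/Disease metrics, and a one-sentence dummy corpus stands in for the unspecified training data. Adam's betas=(0.9, 0.999) and epsilon=1e-08 are the Trainer defaults, so they need no explicit arguments.

```python
from datasets import Dataset
from transformers import (
    AutoModelForTokenClassification,
    AutoTokenizer,
    DataCollatorForTokenClassification,
    Trainer,
    TrainingArguments,
)

base = "dmis-lab/biobert-base-cased-v1.1"
labels = ["O", "B-Chemical", "I-Chemical", "B-Disease", "I-Disease"]  # assumed BIO scheme

tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForTokenClassification.from_pretrained(
    base,
    num_labels=len(labels),
    id2label=dict(enumerate(labels)),
    label2id={label: i for i, label in enumerate(labels)},
)

# Stand-in corpus: one tokenized sentence, every token labelled O and the
# special tokens masked with -100. Replace with the real, label-aligned data.
enc = tokenizer("Cisplatin causes nephrotoxicity in rats.", truncation=True)
enc["labels"] = [
    -100 if tok_id in tokenizer.all_special_ids else 0 for tok_id in enc["input_ids"]
]
train_ds = Dataset.from_dict({k: [v] for k, v in enc.items()})

args = TrainingArguments(
    output_dir="ner-cdr-finetuned",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    eval_strategy="epoch",  # evaluate once per epoch, as in the results table
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_ds,
    eval_dataset=train_ds,  # dummy; substitute the real validation split
    data_collator=DataCollatorForTokenClassification(tokenizer),
)
trainer.train()
```

As a sanity check, the results table implies 57 optimizer steps per epoch (285 steps over 5 epochs); at a batch size of 8 on a single device with no gradient accumulation, that corresponds to roughly 450-456 training sentences.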
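
The per-entity dictionaries reported above (precision, recall, f1, plus a `number` support count) have exactly the shape produced by the seqeval metric, so the evaluation was presumably computed along these lines; this is a sketch of that output format, not the author's confirmed evaluation script.

```python
import evaluate

seqeval = evaluate.load("seqeval")

# Toy example: one four-token sentence with gold vs. predicted BIO tags.
references = [["B-Chemical", "O", "B-Disease", "I-Disease"]]
predictions = [["B-Chemical", "O", "B-Disease", "O"]]

results = seqeval.compute(predictions=predictions, references=references)
print(results)
# {'Chemical': {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1},
#  'Disease':  {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1},
#  'overall_precision': 0.5, 'overall_recall': 0.5,
#  'overall_f1': 0.5, 'overall_accuracy': 0.75}
```

Note that seqeval scores whole entity spans, which is why the truncated Disease prediction counts as fully wrong while token-level accuracy is still 0.75.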
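
Finally, an illustrative inference sketch. The model id `ner-cdr-finetuned` is assumed to be the local output directory from training (a placeholder, not a published hub repo), and the Chemical/Disease entity groups are assumed from the metrics above.

```python
from transformers import pipeline

# Placeholder path/repo id: point this at wherever the fine-tuned
# checkpoint was saved or uploaded.
ner = pipeline(
    "token-classification",
    model="ner-cdr-finetuned",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)

print(ner("Cisplatin-induced nephrotoxicity was observed in treated rats."))
# Expected shape (assuming the Chemical/Disease label set):
# [{'entity_group': 'Chemical', 'word': 'Cisplatin', ...},
#  {'entity_group': 'Disease', 'word': 'nephrotoxicity', ...}]
```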