# bros-funsd-finetuned
This model is a fine-tuned version of [naver-clova-ocr/bros-base-uncased](https://huggingface.co/naver-clova-ocr/bros-base-uncased) on the funsd dataset (a usage sketch follows the results list). It achieves the following results on the evaluation set:
- Loss: 1.7866
- Precision: 0.5993
- Recall: 0.6416
- F1: 0.6197
- Accuracy: 0.7016
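
A minimal inference sketch, not part of the original card: it assumes the base repo ships the tokenizer files, that this checkpoint carries a standard token-classification head with `id2label` in its config, and that the input text and full-page bounding box stand in for real OCR output.

```python
# Hedged inference sketch; placeholder text and a full-page box instead of real OCR.
import torch
from transformers import BrosProcessor, BrosForTokenClassification

processor = BrosProcessor.from_pretrained("naver-clova-ocr/bros-base-uncased")
model = BrosForTokenClassification.from_pretrained("adamadam111/bros-funsd-finetuned")

encoding = processor("Date: March 3, 1998", return_tensors="pt")
seq_len = encoding["input_ids"].shape[-1]
# BROS takes one (x0, y0, x1, y1) box per token, normalized to [0, 1]; a real
# pipeline would project each OCR word box onto its subword tokens.
bbox = torch.tensor([[[0.0, 0.0, 1.0, 1.0]] * seq_len])

with torch.no_grad():
    outputs = model(**encoding, bbox=bbox)

pred_ids = outputs.logits.argmax(-1).squeeze(0).tolist()
print([model.config.id2label[i] for i in pred_ids])
```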
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
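
The card leaves this blank. For context, FUNSD is a form-understanding benchmark of 199 noisy scanned forms (149 train, 50 test); one commonly used Hub copy is `nielsr/funsd`, which is an assumption about this run's source, not something the card states.

```python
# Hedged loading sketch; the dataset id is an assumption, not stated in the card.
from datasets import load_dataset

funsd = load_dataset("nielsr/funsd")
print(funsd)  # train/test splits of annotated form documents
example = funsd["train"][0]
print(example["words"][:5], example["ner_tags"][:5])
```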
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a matching `TrainingArguments` sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 100
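
A minimal sketch of the corresponding `TrainingArguments`, assuming defaults for everything the card does not list; `output_dir` and the per-epoch evaluation cadence are inferred, not stated.

```python
# Hedged reconstruction of the training setup from the listed hyperparameters.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bros-funsd-finetuned",  # assumption: not stated in the card
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=100,
    eval_strategy="epoch",  # assumption: the results table logs eval every epoch
)
```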
### Training results
Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
---|---|---|---|---|---|---|---|
No log | 1.0 | 10 | 1.6503 | 0.0207 | 0.0032 | 0.0055 | 0.3213 |
No log | 2.0 | 20 | 1.5622 | 0.1480 | 0.0596 | 0.0850 | 0.3890 |
No log | 3.0 | 30 | 1.5357 | 0.0770 | 0.0672 | 0.0717 | 0.3803 |
No log | 4.0 | 40 | 1.5160 | 0.1058 | 0.0976 | 0.1015 | 0.4078 |
No log | 5.0 | 50 | 1.4925 | 0.1608 | 0.1768 | 0.1684 | 0.4354 |
No log | 6.0 | 60 | 1.4216 | 0.2011 | 0.2288 | 0.2141 | 0.4571 |
No log | 7.0 | 70 | 1.3546 | 0.2565 | 0.3241 | 0.2864 | 0.5001 |
No log | 8.0 | 80 | 1.2950 | 0.2829 | 0.3818 | 0.3250 | 0.5048 |
No log | 9.0 | 90 | 1.2862 | 0.2909 | 0.3745 | 0.3275 | 0.5226 |
No log | 10.0 | 100 | 1.2108 | 0.2911 | 0.3815 | 0.3302 | 0.5491 |
No log | 11.0 | 110 | 1.2023 | 0.3348 | 0.3609 | 0.3474 | 0.5545 |
No log | 12.0 | 120 | 1.1720 | 0.3616 | 0.4030 | 0.3812 | 0.5668 |
No log | 13.0 | 130 | 1.1267 | 0.3600 | 0.4005 | 0.3792 | 0.5825 |
No log | 14.0 | 140 | 1.1025 | 0.3677 | 0.4499 | 0.4047 | 0.6144 |
No log | 15.0 | 150 | 1.1038 | 0.3914 | 0.4655 | 0.4252 | 0.6182 |
No log | 16.0 | 160 | 1.1034 | 0.4144 | 0.4769 | 0.4434 | 0.6399 |
No log | 17.0 | 170 | 1.1885 | 0.4136 | 0.5250 | 0.4627 | 0.6303 |
No log | 18.0 | 180 | 1.1734 | 0.4652 | 0.4854 | 0.4751 | 0.6491 |
No log | 19.0 | 190 | 1.2263 | 0.4312 | 0.5995 | 0.5016 | 0.6457 |
No log | 20.0 | 200 | 1.2326 | 0.4482 | 0.5612 | 0.4984 | 0.6478 |
No log | 21.0 | 210 | 1.1374 | 0.4892 | 0.5954 | 0.5371 | 0.6776 |
No log | 22.0 | 220 | 1.2278 | 0.4939 | 0.5779 | 0.5326 | 0.6712 |
No log | 23.0 | 230 | 1.2979 | 0.4728 | 0.6030 | 0.5300 | 0.6642 |
No log | 24.0 | 240 | 1.3170 | 0.4885 | 0.5916 | 0.5351 | 0.6682 |
No log | 25.0 | 250 | 1.3692 | 0.4746 | 0.6011 | 0.5304 | 0.6596 |
No log | 26.0 | 260 | 1.3706 | 0.5121 | 0.6106 | 0.5570 | 0.6742 |
No log | 27.0 | 270 | 1.4494 | 0.5195 | 0.6036 | 0.5584 | 0.6719 |
No log | 28.0 | 280 | 1.4790 | 0.5207 | 0.6027 | 0.5587 | 0.6678 |
No log | 29.0 | 290 | 1.4106 | 0.5499 | 0.5887 | 0.5686 | 0.6838 |
No log | 30.0 | 300 | 1.4539 | 0.5607 | 0.5954 | 0.5775 | 0.6810 |
No log | 31.0 | 310 | 1.4746 | 0.5681 | 0.5989 | 0.5831 | 0.6827 |
No log | 32.0 | 320 | 1.5373 | 0.5233 | 0.6144 | 0.5652 | 0.6698 |
No log | 33.0 | 330 | 1.6007 | 0.5131 | 0.6353 | 0.5677 | 0.6682 |
No log | 34.0 | 340 | 1.5237 | 0.5392 | 0.6489 | 0.5890 | 0.6868 |
No log | 35.0 | 350 | 1.5382 | 0.5439 | 0.6239 | 0.5812 | 0.6908 |
No log | 36.0 | 360 | 1.5363 | 0.5615 | 0.6071 | 0.5834 | 0.6872 |
No log | 37.0 | 370 | 1.5504 | 0.5572 | 0.6201 | 0.5870 | 0.6943 |
No log | 38.0 | 380 | 1.6496 | 0.5478 | 0.6176 | 0.5806 | 0.6796 |
No log | 39.0 | 390 | 1.6083 | 0.5665 | 0.6144 | 0.5895 | 0.6913 |
No log | 40.0 | 400 | 1.5588 | 0.5719 | 0.6239 | 0.5968 | 0.6977 |
No log | 41.0 | 410 | 1.6280 | 0.5578 | 0.6328 | 0.5929 | 0.6928 |
No log | 42.0 | 420 | 1.5925 | 0.5842 | 0.6112 | 0.5974 | 0.7023 |
No log | 43.0 | 430 | 1.5921 | 0.5810 | 0.6204 | 0.6001 | 0.6981 |
No log | 44.0 | 440 | 1.6152 | 0.5740 | 0.6207 | 0.5964 | 0.6917 |
No log | 45.0 | 450 | 1.6629 | 0.5634 | 0.6283 | 0.5941 | 0.6853 |
No log | 46.0 | 460 | 1.6112 | 0.5829 | 0.6214 | 0.6015 | 0.7021 |
No log | 47.0 | 470 | 1.6214 | 0.5761 | 0.6258 | 0.5999 | 0.6982 |
No log | 48.0 | 480 | 1.6216 | 0.5953 | 0.6119 | 0.6034 | 0.7023 |
No log | 49.0 | 490 | 1.6592 | 0.5809 | 0.6163 | 0.5981 | 0.6962 |
0.4349 | 50.0 | 500 | 1.6796 | 0.5603 | 0.6489 | 0.6014 | 0.6947 |
0.4349 | 51.0 | 510 | 1.6835 | 0.5967 | 0.6001 | 0.5984 | 0.6933 |
0.4349 | 52.0 | 520 | 1.6615 | 0.5832 | 0.6553 | 0.6171 | 0.6999 |
0.4349 | 53.0 | 530 | 1.6553 | 0.5778 | 0.6565 | 0.6147 | 0.6970 |
0.4349 | 54.0 | 540 | 1.6980 | 0.5946 | 0.6004 | 0.5975 | 0.6888 |
0.4349 | 55.0 | 550 | 1.6484 | 0.5694 | 0.6356 | 0.6007 | 0.6960 |
0.4349 | 56.0 | 560 | 1.6996 | 0.5902 | 0.6293 | 0.6091 | 0.6941 |
0.4349 | 57.0 | 570 | 1.6973 | 0.5780 | 0.6337 | 0.6046 | 0.6947 |
0.4349 | 58.0 | 580 | 1.7212 | 0.5973 | 0.6087 | 0.6030 | 0.6969 |
0.4349 | 59.0 | 590 | 1.7086 | 0.5791 | 0.6435 | 0.6096 | 0.6976 |
0.4349 | 60.0 | 600 | 1.6767 | 0.5845 | 0.6233 | 0.6033 | 0.6996 |
0.4349 | 61.0 | 610 | 1.6744 | 0.5886 | 0.6201 | 0.6039 | 0.6993 |
0.4349 | 62.0 | 620 | 1.6783 | 0.5989 | 0.6286 | 0.6134 | 0.6999 |
0.4349 | 63.0 | 630 | 1.6958 | 0.5936 | 0.6489 | 0.6200 | 0.7019 |
0.4349 | 64.0 | 640 | 1.7297 | 0.5806 | 0.6286 | 0.6037 | 0.6941 |
0.4349 | 65.0 | 650 | 1.7373 | 0.5804 | 0.6540 | 0.6150 | 0.6961 |
0.4349 | 66.0 | 660 | 1.7579 | 0.5818 | 0.6404 | 0.6097 | 0.6941 |
0.4349 | 67.0 | 670 | 1.7654 | 0.5889 | 0.6369 | 0.6120 | 0.6971 |
0.4349 | 68.0 | 680 | 1.7649 | 0.5846 | 0.6515 | 0.6162 | 0.6953 |
0.4349 | 69.0 | 690 | 1.7294 | 0.5940 | 0.6445 | 0.6182 | 0.6999 |
0.4349 | 70.0 | 700 | 1.7256 | 0.5871 | 0.6511 | 0.6175 | 0.7021 |
0.4349 | 71.0 | 710 | 1.7303 | 0.5889 | 0.6518 | 0.6187 | 0.7029 |
0.4349 | 72.0 | 720 | 1.7391 | 0.5994 | 0.6334 | 0.6159 | 0.7023 |
0.4349 | 73.0 | 730 | 1.7270 | 0.5838 | 0.6448 | 0.6128 | 0.6999 |
0.4349 | 74.0 | 740 | 1.7357 | 0.6060 | 0.6324 | 0.6189 | 0.7035 |
0.4349 | 75.0 | 750 | 1.7210 | 0.6030 | 0.6362 | 0.6192 | 0.7036 |
0.4349 | 76.0 | 760 | 1.7575 | 0.5903 | 0.6473 | 0.6175 | 0.6990 |
0.4349 | 77.0 | 770 | 1.7530 | 0.5859 | 0.6416 | 0.6125 | 0.6958 |
0.4349 | 78.0 | 780 | 1.7395 | 0.5865 | 0.6445 | 0.6141 | 0.6988 |
0.4349 | 79.0 | 790 | 1.7432 | 0.5900 | 0.6575 | 0.6219 | 0.7025 |
0.4349 | 80.0 | 800 | 1.7497 | 0.5957 | 0.6556 | 0.6242 | 0.7039 |
0.4349 | 81.0 | 810 | 1.7590 | 0.6003 | 0.6467 | 0.6226 | 0.7040 |
0.4349 | 82.0 | 820 | 1.7641 | 0.5979 | 0.6413 | 0.6189 | 0.7019 |
0.4349 | 83.0 | 830 | 1.7632 | 0.6103 | 0.6407 | 0.6251 | 0.7070 |
0.4349 | 84.0 | 840 | 1.7602 | 0.6082 | 0.6420 | 0.6246 | 0.7066 |
0.4349 | 85.0 | 850 | 1.7697 | 0.6014 | 0.6458 | 0.6228 | 0.7051 |
0.4349 | 86.0 | 860 | 1.7828 | 0.5945 | 0.6397 | 0.6163 | 0.7001 |
0.4349 | 87.0 | 870 | 1.7834 | 0.6005 | 0.6369 | 0.6182 | 0.7005 |
0.4349 | 88.0 | 880 | 1.7760 | 0.5966 | 0.6388 | 0.6170 | 0.7013 |
0.4349 | 89.0 | 890 | 1.7757 | 0.5942 | 0.6426 | 0.6174 | 0.7021 |
0.4349 | 90.0 | 900 | 1.7755 | 0.5946 | 0.6442 | 0.6184 | 0.7025 |
0.4349 | 91.0 | 910 | 1.7778 | 0.5964 | 0.6432 | 0.6189 | 0.7012 |
0.4349 | 92.0 | 920 | 1.7757 | 0.5993 | 0.6435 | 0.6206 | 0.7019 |
0.4349 | 93.0 | 930 | 1.7751 | 0.6014 | 0.6448 | 0.6223 | 0.7025 |
0.4349 | 94.0 | 940 | 1.7769 | 0.6024 | 0.6410 | 0.6211 | 0.7025 |
0.4349 | 95.0 | 950 | 1.7791 | 0.6026 | 0.6394 | 0.6204 | 0.7020 |
0.4349 | 96.0 | 960 | 1.7862 | 0.6016 | 0.6381 | 0.6193 | 0.7012 |
0.4349 | 97.0 | 970 | 1.7876 | 0.5985 | 0.6410 | 0.6190 | 0.7007 |
0.4349 | 98.0 | 980 | 1.7882 | 0.5976 | 0.6404 | 0.6182 | 0.7012 |
0.4349 | 99.0 | 990 | 1.7870 | 0.5988 | 0.6413 | 0.6193 | 0.7014 |
0.0052 | 100.0 | 1000 | 1.7866 | 0.5993 | 0.6416 | 0.6197 | 0.7016 |
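
The precision/recall/F1 columns are consistent with entity-level seqeval scoring, the usual choice for FUNSD token classification; the card does not confirm this, so the following `compute_metrics` is a hedged sketch and `label_list` is illustrative.

```python
# Hedged sketch of seqeval-style metric computation; label_list is an assumption.
import numpy as np
import evaluate

seqeval = evaluate.load("seqeval")
label_list = ["O", "B-HEADER", "I-HEADER", "B-QUESTION", "I-QUESTION", "B-ANSWER", "I-ANSWER"]

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    # Drop special/padding positions, which carry the ignore index -100.
    true_preds = [[label_list[p] for p, l in zip(pred, lab) if l != -100]
                  for pred, lab in zip(predictions, labels)]
    true_labels = [[label_list[l] for p, l in zip(pred, lab) if l != -100]
                   for pred, lab in zip(predictions, labels)]
    results = seqeval.compute(predictions=true_preds, references=true_labels)
    return {
        "precision": results["overall_precision"],
        "recall": results["overall_recall"],
        "f1": results["overall_f1"],
        "accuracy": results["overall_accuracy"],
    }
```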
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1