# rtdetr-v2-r34-cppe5-finetune-2

This model is a fine-tuned version of [PekingU/rtdetr_v2_r34vd](https://huggingface.co/PekingU/rtdetr_v2_r34vd) on the CPPE-5 medical PPE detection dataset (its five categories, coverall, face shield, gloves, goggles, and mask, appear in the per-class metrics below). It achieves the following results on the evaluation set:
- Loss: 8.3132
- Map: 0.3055
- Map 50: 0.5372
- Map 75: 0.3008
- Map Small: 0.1094
- Map Medium: 0.241
- Map Large: 0.3888
- Mar 1: 0.2863
- Mar 10: 0.4999
- Mar 100: 0.578
- Mar Small: 0.3884
- Mar Medium: 0.4832
- Mar Large: 0.7099
- Map Coverall: 0.5596
- Mar 100 Coverall: 0.7275
- Map Face Shield: 0.2013
- Mar 100 Face Shield: 0.6342
- Map Gloves: 0.2742
- Mar 100 Gloves: 0.5192
- Map Goggles: 0.146
- Mar 100 Goggles: 0.4769
- Map Mask: 0.3462
- Mar 100 Mask: 0.5324
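The overall Map reported above is simply the unweighted mean of the five per-class APs, which makes the headline number easy to sanity-check:

```python
# Per-class AP values copied from the evaluation results above.
per_class_ap = {
    "coverall": 0.5596,
    "face_shield": 0.2013,
    "gloves": 0.2742,
    "goggles": 0.1460,
    "mask": 0.3462,
}

# COCO-style mAP averages AP uniformly over classes.
overall_map = sum(per_class_ap.values()) / len(per_class_ap)
print(round(overall_map, 4))  # 0.3055, matching the reported Map
```

The same relationship holds for Mar 100 and the per-class Mar 100 values.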
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- num_epochs: 10
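With 107 optimizer steps per epoch (see the results table below) and 10 epochs, training runs for 1070 steps in total. Assuming the standard linear-with-warmup shape that the transformers linear scheduler uses, the learning rate ramps from 0 to 5e-05 over the first 300 steps and then decays linearly back to zero; a minimal sketch:

```python
def lr_at_step(step, peak_lr=5e-05, warmup_steps=300, total_steps=1070):
    """Linear warmup to peak_lr, then linear decay to zero."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    return peak_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

print(lr_at_step(150))   # halfway through warmup: 2.5e-05
print(lr_at_step(300))   # peak learning rate: 5e-05
print(lr_at_step(1070))  # end of training: 0.0
```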
### Training results
| Training Loss | Epoch | Step | Validation Loss | Map | Map 50 | Map 75 | Map Small | Map Medium | Map Large | Mar 1 | Mar 10 | Mar 100 | Mar Small | Mar Medium | Mar Large | Map Coverall | Mar 100 Coverall | Map Face Shield | Mar 100 Face Shield | Map Gloves | Mar 100 Gloves | Map Goggles | Mar 100 Goggles | Map Mask | Mar 100 Mask |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| No log | 1.0 | 107 | 21.6622 | 0.0574 | 0.1029 | 0.0519 | 0.0001 | 0.0074 | 0.0702 | 0.0808 | 0.1764 | 0.2386 | 0.0255 | 0.1314 | 0.3738 | 0.263 | 0.6131 | 0.0098 | 0.1962 | 0.0056 | 0.1446 | 0.0002 | 0.0692 | 0.0085 | 0.1698 |
| No log | 2.0 | 214 | 12.3093 | 0.1473 | 0.2769 | 0.1347 | 0.04 | 0.108 | 0.2006 | 0.1941 | 0.3837 | 0.4464 | 0.1571 | 0.3411 | 0.6232 | 0.3946 | 0.6973 | 0.0871 | 0.438 | 0.067 | 0.392 | 0.0198 | 0.3308 | 0.1682 | 0.3742 |
| No log | 3.0 | 321 | 9.8181 | 0.2151 | 0.3903 | 0.2003 | 0.0862 | 0.1856 | 0.2692 | 0.2513 | 0.4746 | 0.5437 | 0.2899 | 0.4531 | 0.6932 | 0.4504 | 0.7221 | 0.1277 | 0.5734 | 0.1439 | 0.471 | 0.0493 | 0.4523 | 0.304 | 0.4996 |
| No log | 4.0 | 428 | 9.0262 | 0.2471 | 0.44 | 0.2372 | 0.0808 | 0.2084 | 0.3213 | 0.2685 | 0.4722 | 0.5494 | 0.2294 | 0.4611 | 0.7006 | 0.5037 | 0.7387 | 0.147 | 0.5823 | 0.2081 | 0.4938 | 0.0661 | 0.4338 | 0.3107 | 0.4982 |
| 27.2158 | 5.0 | 535 | 8.6126 | 0.276 | 0.4857 | 0.2769 | 0.104 | 0.2124 | 0.3508 | 0.2741 | 0.4908 | 0.5667 | 0.3254 | 0.4704 | 0.6999 | 0.5335 | 0.7261 | 0.156 | 0.6139 | 0.2473 | 0.5156 | 0.0958 | 0.4523 | 0.3474 | 0.5253 |
| 27.2158 | 6.0 | 642 | 8.4669 | 0.2826 | 0.5022 | 0.2795 | 0.095 | 0.2191 | 0.3575 | 0.2766 | 0.4948 | 0.5704 | 0.3249 | 0.461 | 0.7107 | 0.5455 | 0.7356 | 0.1524 | 0.6215 | 0.2571 | 0.5125 | 0.1208 | 0.4646 | 0.3372 | 0.5178 |
| 27.2158 | 7.0 | 749 | 8.3188 | 0.3003 | 0.5202 | 0.2958 | 0.1107 | 0.2348 | 0.3887 | 0.2879 | 0.5044 | 0.5766 | 0.3707 | 0.4823 | 0.7142 | 0.5545 | 0.7288 | 0.2019 | 0.6329 | 0.2655 | 0.5263 | 0.1341 | 0.4631 | 0.3453 | 0.532 |
| 27.2158 | 8.0 | 856 | 8.3084 | 0.2972 | 0.5265 | 0.2912 | 0.107 | 0.2409 | 0.3732 | 0.2811 | 0.5029 | 0.5811 | 0.3453 | 0.4772 | 0.7138 | 0.5617 | 0.7351 | 0.1649 | 0.6367 | 0.2773 | 0.5192 | 0.1337 | 0.4769 | 0.3485 | 0.5378 |
| 27.2158 | 9.0 | 963 | 8.2764 | 0.3064 | 0.5313 | 0.3049 | 0.1068 | 0.2421 | 0.3871 | 0.284 | 0.5073 | 0.5802 | 0.373 | 0.4783 | 0.7108 | 0.5621 | 0.7333 | 0.182 | 0.6291 | 0.2765 | 0.5299 | 0.1631 | 0.4738 | 0.3483 | 0.5347 |
| 11.8005 | 10.0 | 1070 | 8.3132 | 0.3055 | 0.5372 | 0.3008 | 0.1094 | 0.241 | 0.3888 | 0.2863 | 0.4999 | 0.578 | 0.3884 | 0.4832 | 0.7099 | 0.5596 | 0.7275 | 0.2013 | 0.6342 | 0.2742 | 0.5192 | 0.146 | 0.4769 | 0.3462 | 0.5324 |
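A minimal inference sketch using the standard transformers object-detection API. The checkpoint id matches this card; the 0.5 score threshold is an illustrative choice, not a value from the card. Imports are kept inside the function so the sketch can be defined without the model weights downloaded:

```python
def detect_ppe(image, checkpoint="svetadomoi/rtdetr-v2-r34-cppe5-finetune-2", threshold=0.5):
    """Run the fine-tuned detector on a PIL image; return (label, score, box) tuples."""
    # Local imports: torch/transformers are only needed when inference actually runs.
    import torch
    from transformers import AutoImageProcessor, AutoModelForObjectDetection

    processor = AutoImageProcessor.from_pretrained(checkpoint)
    model = AutoModelForObjectDetection.from_pretrained(checkpoint)

    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)

    # Rescale boxes to the original image size and drop detections below threshold.
    results = processor.post_process_object_detection(
        outputs, threshold=threshold, target_sizes=[(image.height, image.width)]
    )[0]
    return [
        (model.config.id2label[label.item()], round(score.item(), 3), box.tolist())
        for score, label, box in zip(results["scores"], results["labels"], results["boxes"])
    ]
```

For example, `detect_ppe(Image.open("worker.jpg"))` (a hypothetical input image) returns the CPPE-5 detections above the score cut-off.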
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1