# mask2former-finetuned-ER-Mito-LD7
This model is a fine-tuned version of [facebook/mask2former-swin-large-ade-semantic](https://huggingface.co/facebook/mask2former-swin-large-ade-semantic) on the Dnq2025/Mask2former_Pretrain dataset. It achieves the following results on the evaluation set:
- Loss: 32.5161
## Model description
More information needed
## Intended uses & limitations
More information needed
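The checkpoint can be loaded for semantic segmentation through the standard Transformers API. The snippet below is a minimal sketch, assuming the repository ships the usual preprocessor config; the input file name `example.png` is a placeholder.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation

# Load the fine-tuned checkpoint and its image processor from the Hub.
processor = AutoImageProcessor.from_pretrained("Dnq2025/mask2former-finetuned-ER-Mito-LD7")
model = Mask2FormerForUniversalSegmentation.from_pretrained("Dnq2025/mask2former-finetuned-ER-Mito-LD7")
model.eval()

image = Image.open("example.png").convert("RGB")  # placeholder input image
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Merge the predicted mask/class queries into a per-pixel label map
# at the original image resolution (image.size is (width, height)).
segmentation = processor.post_process_semantic_segmentation(
    outputs, target_sizes=[image.size[::-1]]
)[0]
print(segmentation.shape)  # (height, width) tensor of class ids
```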
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (see the sketch after this list):
- learning_rate: 0.0004
- train_batch_size: 4
- eval_batch_size: 4
- seed: 1337
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: polynomial
- training_steps: 6450
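
For reference, here is a hedged reconstruction of the configuration above as Transformers `TrainingArguments`; the `output_dir` is an assumption, data loading and collation are omitted, and the AdamW betas/epsilon listed above match the optimizer's defaults.

```python
from transformers import TrainingArguments

# Sketch of the listed hyperparameters; not the authors' exact script.
training_args = TrainingArguments(
    output_dir="mask2former-finetuned-ER-Mito-LD7",  # assumption
    learning_rate=4e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=1337,
    optim="adamw_torch",             # AdamW; betas=(0.9, 0.999), eps=1e-08 are the defaults
    lr_scheduler_type="polynomial",
    max_steps=6450,                  # training_steps above
    eval_strategy="epoch",           # validation loss was logged once per epoch
)
```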
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 49.96 | 1.0 | 129 | 38.5311 |
| 37.9944 | 2.0 | 258 | 33.0716 |
| 36.2103 | 3.0 | 387 | 29.7335 |
| 29.2814 | 4.0 | 516 | 31.5442 |
| 27.3516 | 5.0 | 645 | 28.0345 |
| 26.4039 | 6.0 | 774 | 27.9411 |
| 24.3475 | 7.0 | 903 | 26.4108 |
| 23.4804 | 8.0 | 1032 | 26.8475 |
| 22.4815 | 9.0 | 1161 | 25.9447 |
| 21.721 | 10.0 | 1290 | 27.1656 |
| 20.8443 | 11.0 | 1419 | 33.2659 |
| 19.9949 | 12.0 | 1548 | 26.9611 |
| 19.893 | 13.0 | 1677 | 26.0445 |
| 18.1322 | 14.0 | 1806 | 27.2854 |
| 18.1679 | 15.0 | 1935 | 25.4194 |
| 17.4814 | 16.0 | 2064 | 25.4006 |
| 17.402 | 17.0 | 2193 | 24.8677 |
| 17.0285 | 18.0 | 2322 | 25.9922 |
| 15.8946 | 19.0 | 2451 | 27.1687 |
| 15.8518 | 20.0 | 2580 | 29.3397 |
| 15.4202 | 21.0 | 2709 | 25.7427 |
| 14.9686 | 22.0 | 2838 | 28.8585 |
| 14.7436 | 23.0 | 2967 | 27.9649 |
| 15.1461 | 24.0 | 3096 | 27.5371 |
| 14.3666 | 25.0 | 3225 | 27.2910 |
| 13.9871 | 26.0 | 3354 | 28.4562 |
| 13.8003 | 27.0 | 3483 | 27.0616 |
| 13.7903 | 28.0 | 3612 | 33.0673 |
| 13.2151 | 29.0 | 3741 | 28.1574 |
| 13.2489 | 30.0 | 3870 | 27.9714 |
| 12.9787 | 31.0 | 3999 | 29.6233 |
| 12.8853 | 32.0 | 4128 | 32.2755 |
| 12.5442 | 33.0 | 4257 | 30.2798 |
| 13.3521 | 34.0 | 4386 | 28.7282 |
| 12.609 | 35.0 | 4515 | 27.4472 |
| 12.1436 | 36.0 | 4644 | 28.6240 |
| 12.0534 | 37.0 | 4773 | 28.3248 |
| 12.4731 | 38.0 | 4902 | 31.0330 |
| 12.0568 | 39.0 | 5031 | 33.3478 |
| 11.7165 | 40.0 | 5160 | 32.6755 |
| 11.7194 | 41.0 | 5289 | 32.9583 |
| 11.5118 | 42.0 | 5418 | 31.8171 |
| 11.2862 | 43.0 | 5547 | 30.4766 |
| 11.8368 | 44.0 | 5676 | 30.9541 |
| 11.2132 | 45.0 | 5805 | 32.4065 |
| 10.606 | 46.0 | 5934 | 31.5392 |
| 11.7442 | 47.0 | 6063 | 31.8038 |
| 10.7855 | 48.0 | 6192 | 32.5302 |
| 11.3661 | 49.0 | 6321 | 32.8178 |
| 10.9675 | 50.0 | 6450 | 32.5147 |
### Framework versions
- Transformers 4.50.0.dev0
- Pytorch 2.4.1
- Datasets 3.3.2
- Tokenizers 0.21.0