dinov2-base-imagenet1k-1-layer-finetuned-galaxy_mnist

This model is a fine-tuned version of facebook/dinov2-base-imagenet1k-1-layer on the matthieulel/galaxy_mnist dataset. It achieves the following results on the evaluation set:

  • Loss: 0.1875
  • Accuracy: 0.9365
  • Precision: 0.9367
  • Recall: 0.9365
  • F1: 0.9365
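
Below is a minimal inference sketch using the standard Transformers image-classification API. It is an assumed usage example, not taken from the card; the image path is a placeholder.

```python
# Minimal inference sketch (assumed usage; the image path is a placeholder).
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

model_id = "matthieulel/dinov2-base-imagenet1k-1-layer-finetuned-galaxy_mnist"
processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id)
model.eval()

image = Image.open("galaxy.png").convert("RGB")  # placeholder input image
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

predicted_id = logits.argmax(-1).item()
print(model.config.id2label[predicted_id])
```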

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-06
  • train_batch_size: 64
  • eval_batch_size: 64
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 256
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 30
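
The sketch below shows one way to express these hyperparameters as a `TrainingArguments` configuration. It is a hedged reconstruction, not the training script: the output directory, evaluation/logging strategies, and best-model settings are assumptions.

```python
# Hedged sketch of a Trainer configuration matching the listed hyperparameters.
# Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the Trainer default optimizer setup.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="dinov2-base-imagenet1k-1-layer-finetuned-galaxy_mnist",  # assumed
    learning_rate=5e-6,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    gradient_accumulation_steps=4,   # 64 * 4 = 256 effective (total) train batch size
    num_train_epochs=30,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    seed=42,
    evaluation_strategy="epoch",     # assumed; the results table reports per-epoch eval
    logging_strategy="epoch",        # assumed
    load_best_model_at_end=True,     # assumed; the reported results match the best epoch
    metric_for_best_model="accuracy",
)
```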

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.8401 | 0.99  | 31  | 0.6220 | 0.7605 | 0.7579 | 0.7605 | 0.7543 |
| 0.3857 | 1.98  | 62  | 0.2696 | 0.8875 | 0.8881 | 0.8875 | 0.8877 |
| 0.3144 | 2.98  | 93  | 0.2491 | 0.9015 | 0.9028 | 0.9015 | 0.9010 |
| 0.2769 | 4.0   | 125 | 0.2179 | 0.913  | 0.9129 | 0.913  | 0.9128 |
| 0.2858 | 4.99  | 156 | 0.2455 | 0.9025 | 0.9070 | 0.9025 | 0.9020 |
| 0.2704 | 5.98  | 187 | 0.2121 | 0.9155 | 0.9234 | 0.9155 | 0.9156 |
| 0.2557 | 6.98  | 218 | 0.2177 | 0.9155 | 0.9190 | 0.9155 | 0.9152 |
| 0.2069 | 8.0   | 250 | 0.1864 | 0.9255 | 0.9256 | 0.9255 | 0.9255 |
| 0.2344 | 8.99  | 281 | 0.1894 | 0.923  | 0.9237 | 0.923  | 0.9230 |
| 0.1996 | 9.98  | 312 | 0.1993 | 0.9235 | 0.9260 | 0.9235 | 0.9234 |
| 0.2011 | 10.98 | 343 | 0.1828 | 0.928  | 0.9280 | 0.928  | 0.9279 |
| 0.2229 | 12.0  | 375 | 0.2358 | 0.9155 | 0.9233 | 0.9155 | 0.9145 |
| 0.1792 | 12.99 | 406 | 0.1897 | 0.9205 | 0.9214 | 0.9205 | 0.9205 |
| 0.1898 | 13.98 | 437 | 0.2017 | 0.921  | 0.9217 | 0.921  | 0.9208 |
| 0.1735 | 14.98 | 468 | 0.1954 | 0.927  | 0.9270 | 0.927  | 0.9269 |
| 0.1751 | 16.0  | 500 | 0.1918 | 0.9295 | 0.9299 | 0.9295 | 0.9294 |
| 0.1732 | 16.99 | 531 | 0.1906 | 0.922  | 0.9225 | 0.922  | 0.9219 |
| 0.1738 | 17.98 | 562 | 0.1846 | 0.931  | 0.9317 | 0.931  | 0.9310 |
| 0.1694 | 18.98 | 593 | 0.1875 | 0.9365 | 0.9367 | 0.9365 | 0.9365 |
| 0.1723 | 20.0  | 625 | 0.1941 | 0.9285 | 0.9293 | 0.9285 | 0.9284 |
| 0.1574 | 20.99 | 656 | 0.1905 | 0.9335 | 0.9337 | 0.9335 | 0.9336 |
| 0.1485 | 21.98 | 687 | 0.1869 | 0.9315 | 0.9313 | 0.9315 | 0.9314 |
| 0.1537 | 22.98 | 718 | 0.1830 | 0.936  | 0.9360 | 0.936  | 0.9360 |
| 0.1406 | 24.0  | 750 | 0.1975 | 0.932  | 0.9322 | 0.932  | 0.9320 |
| 0.1326 | 24.99 | 781 | 0.1918 | 0.9315 | 0.9316 | 0.9315 | 0.9315 |
| 0.1238 | 25.98 | 812 | 0.2105 | 0.9275 | 0.9288 | 0.9275 | 0.9276 |
| 0.1299 | 26.98 | 843 | 0.2022 | 0.9325 | 0.9327 | 0.9325 | 0.9324 |
| 0.1387 | 28.0  | 875 | 0.2011 | 0.9335 | 0.9337 | 0.9335 | 0.9336 |
| 0.1279 | 28.99 | 906 | 0.2005 | 0.931  | 0.9310 | 0.931  | 0.9310 |
| 0.1256 | 29.76 | 930 | 0.2004 | 0.931  | 0.9310 | 0.931  | 0.9310 |
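
The sketch below shows one way the precision/recall/F1 columns above could be computed as a `compute_metrics` callback. It is an assumption: the card does not state the averaging mode, and weighted averaging is used here as a plausible default.

```python
# Hedged sketch of a metric callback producing accuracy, precision, recall, and F1.
# average="weighted" is an assumption; the card does not specify the averaging mode.
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="weighted"
    )
    return {
        "accuracy": accuracy_score(labels, preds),
        "precision": precision,
        "recall": recall,
        "f1": f1,
    }
```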

Framework versions

  • Transformers 4.37.2
  • Pytorch 2.3.0
  • Datasets 2.19.1
  • Tokenizers 0.15.1