windowz_test-020525-1

This model is a fine-tuned version of an unspecified base model on an unknown dataset. It achieves the following results on the evaluation set (a sketch of how such per-class metrics can be computed follows the list):

  • Model Preparation Time: 0.001
  • Accuracy: 0.9855
  • F1: 0.9845
  • IoU: 0.9719
  • Contour Dice: 0.9827
  • Per-class metrics:
      • Class 0: F1 0.99421, IoU 0.98849, Accuracy 0.99132, Contour Dice 0.99421
      • Class 1: F1 0.97069, IoU 0.94305, Accuracy 0.98574, Contour Dice 0.97069
      • Class 2: F1 0.57177, IoU 0.40033, Accuracy 0.99387, Contour Dice 0.57177
  • Loss: 0.1455
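
The evaluation code for this model is not documented in the card. The sketch below is only an illustration of how per-class F1, IoU, and one-vs-rest accuracy can be derived from a confusion matrix; the Contour Dice metric (which requires boundary extraction) is omitted, and all names and numbers in the example are hypothetical.

```python
import numpy as np

def per_class_metrics(confusion: np.ndarray) -> dict:
    """Per-class F1, IoU and one-vs-rest accuracy from a square confusion
    matrix (rows = ground-truth class, columns = predicted class)."""
    total = confusion.sum()
    results = {}
    for c in range(confusion.shape[0]):
        tp = confusion[c, c]
        fn = confusion[c, :].sum() - tp  # class-c pixels predicted as something else
        fp = confusion[:, c].sum() - tp  # other pixels predicted as class c
        tn = total - tp - fn - fp
        results[c] = {
            "f1": float(2 * tp / (2 * tp + fp + fn)) if (tp + fp + fn) else 0.0,
            "iou": float(tp / (tp + fp + fn)) if (tp + fp + fn) else 0.0,
            "accuracy": float((tp + tn) / total) if total else 0.0,
        }
    return results

# Toy 3-class confusion matrix (pixel counts are made up):
cm = np.array([[90, 5, 5],
               [4, 80, 6],
               [2, 3, 25]])
print(per_class_metrics(cm))
```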

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 1000
  • num_epochs: 2
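
The training script itself is not published. As a minimal sketch, the settings above map onto transformers.TrainingArguments roughly as shown below; the output directory and any arguments not listed above (logging, checkpointing, precision, ...) are assumptions, and note that the Trainer's default optimizer is AdamW rather than plain Adam.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="windowz_test-020525-1",  # hypothetical output path
    learning_rate=5e-05,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="cosine",
    warmup_steps=1000,
    num_train_epochs=2,
)
```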

Training results

Training Loss | Epoch | Step | Model Preparation Time | IoU | Contour Dice | Per Class Metrics | Validation Loss
1.0553 0.1001 513 0.001 0.2042 0.4026 {0: {'f1': 0.00261, 'iou': 0.00131, 'accuracy': 0.25279, 'contour_dice': 0.00261}, 1: {'f1': 0.91168, 'iou': 0.83769, 'accuracy': 0.95417, 'contour_dice': 0.91168}, 2: {'f1': 0.0098, 'iou': 0.00492, 'accuracy': 0.27529, 'contour_dice': 0.0098}} 1.0723
0.9452 0.2002 1026 0.001 0.7779 0.7181 {0: {'f1': 0.92874, 'iou': 0.86697, 'accuracy': 0.88625, 'contour_dice': 0.92874}, 1: {'f1': 0.69461, 'iou': 0.53211, 'accuracy': 0.88238, 'contour_dice': 0.69461}, 2: {'f1': 0.05695, 'iou': 0.02931, 'accuracy': 0.98261, 'contour_dice': 0.05695}} 0.8706
0.8477 0.3002 1539 0.001 0.8084 0.7673 {0: {'f1': 0.93867, 'iou': 0.88442, 'accuracy': 0.90292, 'contour_dice': 0.93867}, 1: {'f1': 0.74567, 'iou': 0.59448, 'accuracy': 0.89835, 'contour_dice': 0.74567}, 2: {'f1': 0.43351, 'iou': 0.27674, 'accuracy': 0.99005, 'contour_dice': 0.43351}} 0.4034
0.8114 0.4003 2052 0.001 0.7795 0.7144 {0: {'f1': 0.92943, 'iou': 0.86817, 'accuracy': 0.88683, 'contour_dice': 0.92943}, 1: {'f1': 0.69334, 'iou': 0.53062, 'accuracy': 0.88266, 'contour_dice': 0.69334}, 2: {'f1': 0.24887, 'iou': 0.14212, 'accuracy': 0.9898, 'contour_dice': 0.24887}} 0.5827
0.7701 0.5004 2565 0.001 0.9032 0.9078 {0: {'f1': 0.97215, 'iou': 0.94582, 'accuracy': 0.95722, 'contour_dice': 0.97215}, 1: {'f1': 0.88981, 'iou': 0.8015, 'accuracy': 0.95011, 'contour_dice': 0.88981}, 2: {'f1': 0.23167, 'iou': 0.13101, 'accuracy': 0.99155, 'contour_dice': 0.23167}} 0.3269
0.7462 0.6005 3078 0.001 0.9018 0.9034 {0: {'f1': 0.97101, 'iou': 0.94366, 'accuracy': 0.9554, 'contour_dice': 0.97101}, 1: {'f1': 0.88956, 'iou': 0.80109, 'accuracy': 0.95028, 'contour_dice': 0.88956}, 2: {'f1': 0.28195, 'iou': 0.16411, 'accuracy': 0.99182, 'contour_dice': 0.28195}} 0.3012
0.7083 0.7005 3591 0.001 0.8870 0.8875 {0: {'f1': 0.96677, 'iou': 0.93567, 'accuracy': 0.94869, 'contour_dice': 0.96677}, 1: {'f1': 0.86981, 'iou': 0.76962, 'accuracy': 0.94193, 'contour_dice': 0.86981}, 2: {'f1': 0.08311, 'iou': 0.04336, 'accuracy': 0.99081, 'contour_dice': 0.08311}} 0.3264
0.6957 0.8006 4104 0.001 0.8885 0.8861 {0: {'f1': 0.96671, 'iou': 0.93556, 'accuracy': 0.94848, 'contour_dice': 0.96671}, 1: {'f1': 0.87151, 'iou': 0.77229, 'accuracy': 0.94332, 'contour_dice': 0.87151}, 2: {'f1': 0.25672, 'iou': 0.14726, 'accuracy': 0.99154, 'contour_dice': 0.25672}} 0.3388
0.6415 0.9007 4617 0.001 0.9228 0.9283 {0: {'f1': 0.97759, 'iou': 0.95616, 'accuracy': 0.96585, 'contour_dice': 0.97759}, 1: {'f1': 0.91553, 'iou': 0.84422, 'accuracy': 0.96085, 'contour_dice': 0.91553}, 2: {'f1': 0.45503, 'iou': 0.29452, 'accuracy': 0.99296, 'contour_dice': 0.45503}} 0.2154
0.66 1.0008 5130 0.001 0.9150 0.9178 {0: {'f1': 0.97496, 'iou': 0.95115, 'accuracy': 0.96161, 'contour_dice': 0.97496}, 1: {'f1': 0.90465, 'iou': 0.82589, 'accuracy': 0.95679, 'contour_dice': 0.90465}, 2: {'f1': 0.509, 'iou': 0.34138, 'accuracy': 0.99326, 'contour_dice': 0.509}} 0.2575
0.627 1.1009 5643 0.001 0.9406 0.9518 {0: {'f1': 0.98469, 'iou': 0.96984, 'accuracy': 0.97676, 'contour_dice': 0.98469}, 1: {'f1': 0.93594, 'iou': 0.87959, 'accuracy': 0.96989, 'contour_dice': 0.93594}, 2: {'f1': 0.32331, 'iou': 0.19282, 'accuracy': 0.99207, 'contour_dice': 0.32331}} 0.2241
0.6033 1.2009 6156 0.001 0.9191 0.9237 {0: {'f1': 0.97668, 'iou': 0.95443, 'accuracy': 0.96429, 'contour_dice': 0.97668}, 1: {'f1': 0.91018, 'iou': 0.83517, 'accuracy': 0.95907, 'contour_dice': 0.91018}, 2: {'f1': 0.42756, 'iou': 0.27191, 'accuracy': 0.99276, 'contour_dice': 0.42756}} 0.2139
0.6268 1.3010 6669 0.001 0.9675 0.9773 {0: {'f1': 0.99254, 'iou': 0.98518, 'accuracy': 0.98877, 'contour_dice': 0.99254}, 1: {'f1': 0.96583, 'iou': 0.93391, 'accuracy': 0.98358, 'contour_dice': 0.96583}, 2: {'f1': 0.59851, 'iou': 0.42705, 'accuracy': 0.9941, 'contour_dice': 0.59851}} 0.1872
0.5698 1.4011 7182 0.001 0.9597 0.9709 {0: {'f1': 0.99051, 'iou': 0.98119, 'accuracy': 0.98568, 'contour_dice': 0.99051}, 1: {'f1': 0.95785, 'iou': 0.91911, 'accuracy': 0.97981, 'contour_dice': 0.95785}, 2: {'f1': 0.46278, 'iou': 0.30105, 'accuracy': 0.99323, 'contour_dice': 0.46278}} 0.1653
0.5933 1.5012 7695 0.001 0.9349 0.9416 {0: {'f1': 0.98167, 'iou': 0.96399, 'accuracy': 0.97209, 'contour_dice': 0.98167}, 1: {'f1': 0.93053, 'iou': 0.87009, 'accuracy': 0.96767, 'contour_dice': 0.93053}, 2: {'f1': 0.45029, 'iou': 0.29056, 'accuracy': 0.99309, 'contour_dice': 0.45029}} 0.1594
0.6071 1.6012 8208 0.001 0.9719 0.9827 {0: {'f1': 0.99421, 'iou': 0.98849, 'accuracy': 0.99132, 'contour_dice': 0.99421}, 1: {'f1': 0.97069, 'iou': 0.94305, 'accuracy': 0.98574, 'contour_dice': 0.97069}, 2: {'f1': 0.57177, 'iou': 0.40033, 'accuracy': 0.99387, 'contour_dice': 0.57177}} 0.1455
0.5867 1.7013 8721 0.001 0.9567 0.9657 {0: {'f1': 0.9889, 'iou': 0.97805, 'accuracy': 0.98323, 'contour_dice': 0.9889}, 1: {'f1': 0.95391, 'iou': 0.91189, 'accuracy': 0.97813, 'contour_dice': 0.95391}, 2: {'f1': 0.59149, 'iou': 0.41994, 'accuracy': 0.99415, 'contour_dice': 0.59149}} 0.1466
0.5937 1.8014 9234 0.001 0.9305 0.9356 {0: {'f1': 0.97999, 'iou': 0.96076, 'accuracy': 0.96946, 'contour_dice': 0.97999}, 1: {'f1': 0.92376, 'iou': 0.85832, 'accuracy': 0.96491, 'contour_dice': 0.92376}, 2: {'f1': 0.55434, 'iou': 0.38345, 'accuracy': 0.99389, 'contour_dice': 0.55434}} 0.2816
0.6021 1.9015 9747 0.001 0.9137 0.9154 {0: {'f1': 0.97439, 'iou': 0.95005, 'accuracy': 0.96068, 'contour_dice': 0.97439}, 1: {'f1': 0.90445, 'iou': 0.82557, 'accuracy': 0.95681, 'contour_dice': 0.90445}, 2: {'f1': 0.46077, 'iou': 0.29935, 'accuracy': 0.99314, 'contour_dice': 0.46077}} 0.1777

Framework versions

  • Transformers 4.45.0
  • PyTorch 2.5.1+cu124
  • Datasets 2.21.0
  • Tokenizers 0.20.3
Model size

  • 544k params (Safetensors, tensor type F32)
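
The card does not state the model's task or architecture (its pipeline type is undetermined), so the loading sketch below uses only the generic Auto classes. The checkpoint identifier is a placeholder for the actual Hub repository id or a local checkpoint directory.

```python
from transformers import AutoConfig, AutoModel

checkpoint = "windowz_test-020525-1"  # placeholder: replace with the real Hub id or a local path
config = AutoConfig.from_pretrained(checkpoint)
model = AutoModel.from_pretrained(checkpoint)
model.eval()

print(config.model_type, sum(p.numel() for p in model.parameters()), "parameters")
```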