rwightman (HF Staff) committed verified commit dc60d0a (parent: a6c79bc)

Update model config and README

Files changed (1): README.md (+5 -5)
README.md CHANGED
@@ -2,7 +2,6 @@
 tags:
 - image-classification
 - timm
-- transformers
 library_name: timm
 license: apache-2.0
 datasets:
@@ -27,7 +26,7 @@ There are a number of models in the lower end of model scales that originate in
 | betwixt | 640 | 2560 (4) | 10 | 12 | y |
 | base | 768 | 3072 (4) | 12 | 12 | n |
 | so150m2 | 832 | 2176 (2.57) | 13 | 21 | y |
-| so150 | 896 | 2304 (2.62) | 14 | 18 | y |
+| so150m | 896 | 2304 (2.62) | 14 | 18 | y |
 
 Pretrained on ImageNet-12k and fine-tuned on ImageNet-1k by Ross Wightman in `timm` using recipe template described below.
 
@@ -141,8 +140,9 @@ output = model.forward_head(output, pre_logits=True)
 ## Model Comparison
 | model | top1 | top5 | param_count | img_size |
 | -------------------------------------------------- | ------ | ------ | ----------- | -------- |
-| [vit_so150m2_patch16_reg1_gap_384.sbb_e200_in12k_ft_in1k](https://huggingface.co/timm/vit_so150m2_patch16_reg1_gap_384.sbb_e200_in12k_ft_in1k) | 87.930 | 98.502 | 136.33 | 384 |
-| [vit_so150m2_patch16_reg1_gap_256.sbb_e200_in12k_ft_in1k](https://huggingface.co/timm/vit_so150m2_patch16_reg1_gap_256.sbb_e200_in12k_ft_in1k) | 87.308 | 98.326 | 136.33 | 256 |
+| [vit_so150m2_patch16_reg1_gap_448.sbb2_e200_in12k_ft_in1k](https://huggingface.co/timm/vit_so150m2_patch16_reg1_gap_448.sbb2_e200_in12k_ft_in1k) | 88.068 | 98.588 | 136.33 | 448 |
+| [vit_so150m2_patch16_reg1_gap_384.sbb2_e200_in12k_ft_in1k](https://huggingface.co/timm/vit_so150m2_patch16_reg1_gap_384.sbb2_e200_in12k_ft_in1k) | 87.930 | 98.502 | 136.33 | 384 |
+| [vit_so150m2_patch16_reg1_gap_256.sbb2_e200_in12k_ft_in1k](https://huggingface.co/timm/vit_so150m2_patch16_reg1_gap_256.sbb2_e200_in12k_ft_in1k) | 87.308 | 98.326 | 136.33 | 256 |
 | [vit_mediumd_patch16_reg4_gap_384.sbb2_e200_in12k_ft_in1k](https://huggingface.co/timm/vit_mediumd_patch16_reg4_gap_384.sbb2_e200_in12k_ft_in1k) | 87.438 | 98.256 | 64.11 | 384 |
 | [vit_mediumd_patch16_reg4_gap_256.sbb2_e200_in12k_ft_in1k](https://huggingface.co/timm/vit_mediumd_patch16_reg4_gap_256.sbb2_e200_in12k_ft_in1k) | 86.608 | 97.934 | 64.11 | 256 |
 | [vit_betwixt_patch16_reg4_gap_384.sbb2_e200_in12k_ft_in1k](https://huggingface.co/timm/vit_betwixt_patch16_reg4_gap_384.sbb2_e200_in12k_ft_in1k) | 86.594 | 98.02 | 60.4 | 384 |
@@ -190,4 +190,4 @@ output = model.forward_head(output, pre_logits=True)
 journal={ICLR},
 year={2021}
 }
-```
+```