rwightman committed · Commit 26789ab · verified · 1 Parent(s): 98cc1d7
Files changed (4)
  1. README.md +194 -0
  2. config.json +39 -0
  3. model.safetensors +3 -0
  4. pytorch_model.bin +3 -0
README.md ADDED
@@ -0,0 +1,194 @@
+ ---
+ tags:
+ - image-classification
+ - timm
+ - transformers
+ pipeline_tag: image-classification
+ library_name: timm
+ license: other
+ license_name: dinov3-license
+ license_link: https://ai.meta.com/resources/models-and-libraries/dinov3-license
+ datasets:
+ - lvd-1689m
+ ---
+ # Model card for vit_small_plus_patch16_dinov3_qkvb.lvdm_1689m
+
+ A DINOv3 ViT model image feature encoder. Distilled on LVD-1689M from the DINOv3 ViT-7B model.
+
+ ## Model Notes
+ * The original model weights ended up with all QKV projection biases being zero. For `timm`, the QKV bias has been disabled (`qkv_bias=False`) for these models and the zero weights are not loaded. For some model sizes there are variants with `qkvb` in the name that keep the bias enabled (`qkv_bias=True`), but zero, to match the behaviour of the `transformers` and original models.
+ * The original models keep RoPE periods as a persistent `bfloat16` buffer, while `timm` generates `float32` periods at init. This results in some numerical differences; however, the `timm` approach should be less problematic on devices without bfloat16 support, and appears to work as well as, if not slightly better than, the original for fine-tuning. `model.rope.periods = model.rope.periods.to(torch.bfloat16).to(torch.float32)` will truncate the periods to bfloat16 and produce matching outputs (see the sketch following these notes).
+
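+ A minimal sketch combining both notes above (not part of the original card): it loads this `qkvb` variant, checks that the fused QKV projection biases are indeed all zero, then applies the RoPE period truncation. The parameter-name filter (`'qkv'` / `bias`) is an assumption about `timm`'s parameter naming; the `model.rope.periods` line is taken verbatim from the note above.
+
+ ```python
+ import torch
+ import timm
+
+ model = timm.create_model('vit_small_plus_patch16_dinov3_qkvb.lvdm_1689m', pretrained=True)
+ model = model.eval()
+
+ # assumption: the fused QKV projection parameters contain 'qkv' in their names
+ for name, param in model.named_parameters():
+     if 'qkv' in name and name.endswith('bias'):
+         assert torch.all(param == 0)  # bias tensors are present in this variant, but all zero
+
+ # truncate RoPE periods to bfloat16 (and back to float32) to match original model outputs
+ model.rope.periods = model.rope.periods.to(torch.bfloat16).to(torch.float32)
+ ```
+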
+ ## Model Details
+ - **Model Type:** Image feature encoder
+ - **Model Stats:**
+   - Params (M): 28.7
+   - GMACs: 8.1
+   - Activations (M): 21.8
+   - Image size: 256 x 256
+ - **Original:** https://github.com/facebookresearch/dinov3
+ - **License:** [DINOv3](https://ai.meta.com/resources/models-and-libraries/dinov3-license)
+ - **Dataset:** LVD-1689M
+ - **Papers:**
+   - DINOv3: https://arxiv.org/abs/2508.10104
+   - An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2
+   - PyTorch Image Models: https://github.com/huggingface/pytorch-image-models
+
+ ## Model Usage
+ ### Image Classification
+ ```python
+ from urllib.request import urlopen
+ from PIL import Image
+ import torch
+ import timm
+
+ img = Image.open(urlopen(
+     'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
+ ))
+
+ model = timm.create_model('vit_small_plus_patch16_dinov3_qkvb.lvdm_1689m', pretrained=True)
+ model = model.eval()
+
+ # get model specific transforms (normalization, resize)
+ data_config = timm.data.resolve_model_data_config(model)
+ transforms = timm.data.create_transform(**data_config, is_training=False)
+
+ output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1
+
+ top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
+ ```
+
+ ### Feature Map Extraction
+ ```python
+ from urllib.request import urlopen
+ from PIL import Image
+ import timm
+
+ img = Image.open(urlopen(
+     'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
+ ))
+
+ model = timm.create_model(
+     'vit_small_plus_patch16_dinov3_qkvb.lvdm_1689m',
+     pretrained=True,
+     features_only=True,
+ )
+ model = model.eval()
+
+ # get model specific transforms (normalization, resize)
+ data_config = timm.data.resolve_model_data_config(model)
+ transforms = timm.data.create_transform(**data_config, is_training=False)
+
+ output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1
+
+ for o in output:
+     # print shape of each feature map in output
+     # e.g.:
+     #  torch.Size([1, 384, 16, 16])
+     #  torch.Size([1, 384, 16, 16])
+     #  torch.Size([1, 384, 16, 16])
+
+     print(o.shape)
+ ```
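+
+ The example above returns the model's default set of feature maps (three, from the last blocks). As a brief sketch (not from the original card), specific blocks can be selected by passing `out_indices` to `timm.create_model`; the indices below are illustrative and assume this model's 12 transformer blocks:
+
+ ```python
+ import timm
+
+ # select feature maps from specific transformer blocks (illustrative indices, assuming 12 blocks)
+ model = timm.create_model(
+     'vit_small_plus_patch16_dinov3_qkvb.lvdm_1689m',
+     pretrained=True,
+     features_only=True,
+     out_indices=(2, 5, 8, 11),
+ )
+ ```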
+
+ ### Image Embeddings
+ ```python
+ from urllib.request import urlopen
+ from PIL import Image
+ import timm
+
+ img = Image.open(urlopen(
+     'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
+ ))
+
+ model = timm.create_model(
+     'vit_small_plus_patch16_dinov3_qkvb.lvdm_1689m',
+     pretrained=True,
+     num_classes=0,  # remove classifier nn.Linear
+ )
+ model = model.eval()
+
+ # get model specific transforms (normalization, resize)
+ data_config = timm.data.resolve_model_data_config(model)
+ transforms = timm.data.create_transform(**data_config, is_training=False)
+
+ output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor
+
+ # or equivalently (without needing to set num_classes=0)
+
+ output = model.forward_features(transforms(img).unsqueeze(0))
+ # output is unpooled, a (1, 261, 384) shaped tensor
+
+ output = model.forward_head(output, pre_logits=True)
+ # output is a (1, num_features) shaped tensor
+ ```
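+
+ As a usage follow-up (not part of the original card), a minimal sketch comparing two images by the cosine similarity of their pooled embeddings. It continues from the snippet above (`model` created with `num_classes=0`, plus `transforms`); `img_a` and `img_b` are placeholders for any two PIL images:
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ # img_a / img_b: any two PIL images, preprocessed with the transforms resolved above
+ with torch.no_grad():
+     emb_a = model(transforms(img_a).unsqueeze(0))  # (1, num_features)
+     emb_b = model(transforms(img_b).unsqueeze(0))
+
+ similarity = F.cosine_similarity(emb_a, emb_b)  # tensor of shape (1,), values in [-1, 1]
+ ```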
+
+ ## Model Comparison
+ See the associated paper for details on the evaluation protocols.
+
+ ### Results for ViT backbones pretrained (or distilled) on web (LVD-1689M)
+
+ | Model | IN-ReaL | IN-R | Obj.Net | Ox.-H | ADE20k | NYU↓ | DAVIS | NAVI | SPair |
+ |-------|---------|------|---------|-------|--------|------|-------|------|-------|
+ | **Global Tasks** | | | | | **Dense Tasks** | | | | |
+ | DINOv3 ViT-S/16 | 87.0 | 60.4 | 50.9 | 49.5 | 47.0 | 0.403 | 72.7 | 56.3 | 50.4 |
+ | DINOv3 ViT-S+/16 | 88.0 | 68.8 | 54.6 | 50.0 | 48.8 | 0.399 | 75.5 | 57.1 | 55.2 |
+ | DINOv3 ViT-B/16 | 89.3 | 76.7 | 64.1 | 58.5 | 51.8 | 0.373 | 77.2 | 58.8 | 57.2 |
+ | DINOv3 ViT-L/16 | 90.2 | 88.1 | 74.8 | 63.1 | 54.9 | 0.352 | 79.9 | 62.3 | 61.3 |
+ | DINOv3 ViT-H+/16 | 90.3 | 90.0 | 78.6 | 64.5 | 54.8 | 0.352 | 79.3 | 63.3 | 56.3 |
+ | DINOv3 ViT-7B/16 | 90.4 | 91.1 | 91.1 | 72.8 | 55.9 | 0.309 | 79.7 | 64.4 | 58.7 |
+
+ ### Results for ConvNeXt backbones distilled on web (LVD-1689M)
+
+ | Model | IN-ReaL @256px | IN-ReaL @512px | IN-R @256px | IN-R @512px | Obj.Net @256px | Obj.Net @512px | ADE20k | NYU↓ |
+ |-------|----------------|----------------|-------------|-------------|----------------|----------------|--------|------|
+ | **Global Tasks** | | | | | | | **Dense Tasks** | |
+ | DINOv3 ConvNeXt Tiny | 86.6 | 87.7 | 73.7 | 74.1 | 52.6 | 58.7 | 42.7 | 0.448 |
+ | DINOv3 ConvNeXt Small | 87.9 | 88.7 | 73.7 | 74.1 | 52.6 | 58.7 | 44.8 | 0.432 |
+ | DINOv3 ConvNeXt Base | 88.5 | 89.2 | 77.2 | 78.2 | 56.2 | 61.3 | 46.3 | 0.420 |
+ | DINOv3 ConvNeXt Large | 88.9 | 89.4 | 81.3 | 82.4 | 59.3 | 65.2 | 47.8 | 0.403 |
+
+ ### Results for ViT backbones pretrained (or distilled) on satellite (SAT-493M)
+
+ #### (GEO-Bench) Classification
+
+ | Model | m-BEnet | m-brick-kiln | m-eurosat | m-forestnet | m-pv4ger | m-so2sat | mean |
+ |-------|---------|--------------|-----------|-------------|----------|----------|------|
+ | DINOv3 ViT-L/16 | 73.0 | 96.5 | 94.1 | 60.6 | 96.0 | 57.4 | 79.6 |
+ | DINOv3 ViT-7B/16 | 74.0 | 97.2 | 94.8 | 62.3 | 96.1 | 62.1 | 81.1 |
+
+ #### (GEO-Bench) Segmentation
+
+ | Model | m-cashew | m-chesapeake | m-NeonTree | m-nz-cattle | m-pv4ger-seg | m-SA-crop | mean |
+ |-------|----------|--------------|------------|-------------|--------------|-----------|------|
+ | DINOv3 ViT-L/16 | 94.2 | 75.6 | 61.8 | 83.7 | 95.2 | 36.8 | 74.5 |
+ | DINOv3 ViT-7B/16 | 94.1 | 76.6 | 62.6 | 83.4 | 95.5 | 37.6 | 75.0 |
+
+ ## Citation
+ ```bibtex
+ @article{simeoni2025dinov3,
+   title={DINOv3},
+   author={Sim{\'e}oni, Oriane and Vo, Huy V and Seitzer, Maximilian and Baldassarre, Federico and Oquab, Maxime and Jose, Cijo and Khalidov, Vasil and Szafraniec, Marc and Yi, Seungeun and Ramamonjisoa, Micha{\"e}l and others},
+   journal={arXiv preprint arXiv:2508.10104},
+   year={2025}
+ }
+ ```
+ ```bibtex
+ @article{dosovitskiy2020vit,
+   title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale},
+   author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil},
+   journal={ICLR},
+   year={2021}
+ }
+ ```
+ ```bibtex
+ @misc{rw2019timm,
+   author = {Ross Wightman},
+   title = {PyTorch Image Models},
+   year = {2019},
+   publisher = {GitHub},
+   journal = {GitHub repository},
+   doi = {10.5281/zenodo.4414861},
+   howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
+ }
+ ```
config.json ADDED
@@ -0,0 +1,39 @@
+ {
+   "architecture": "vit_small_plus_patch16_dinov3_qkvb",
+   "num_classes": 0,
+   "num_features": 384,
+   "global_pool": "avg",
+   "pretrained_cfg": {
+     "tag": "lvdm_1689m",
+     "custom_load": false,
+     "input_size": [
+       3,
+       256,
+       256
+     ],
+     "min_input_size": [
+       3,
+       128,
+       128
+     ],
+     "fixed_input_size": false,
+     "interpolation": "bicubic",
+     "crop_pct": 1.0,
+     "crop_mode": "center",
+     "mean": [
+       0.485,
+       0.456,
+       0.406
+     ],
+     "std": [
+       0.229,
+       0.224,
+       0.225
+     ],
+     "num_classes": 0,
+     "pool_size": null,
+     "first_conv": "patch_embed.proj",
+     "classifier": "head",
+     "license": "dinov3"
+   }
+ }
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7b4d2e3482f63f349b74e34475d1f3d3450aa05838fb3d38bf74bbde8e4d535a
+ size 114789032
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:10697496d696582eeb0e94077fdc2715a4b0e7a7b09509072ef04eb2b60c2b28
+ size 114844562