---
tags:
- image-feature-extraction
- birder
- pytorch
library_name: birder
license: apache-2.0
---

# Model Card for vit_reg4_m16_rms_avg_i-jepa

A ViT m16 image encoder with RMSNorm, pre-trained using I-JEPA. This model has *not* been fine-tuned for a specific classification task and is intended to be used as a general-purpose feature extractor or as a backbone for downstream tasks such as object detection, segmentation, or custom classification.

## Model Details

- **Model Type:** Image classification and detection backbone
- **Model Stats:**
  - Params (M): 38.3
  - Input image size: 224 x 224
- **Dataset:** Trained on a diverse dataset of approximately 13.5M images (~12M unique images before sampling), including:
  - iNaturalist 2021 (~2.6M) x1
  - imagenet-1k-webp (~1.3M) x2
  - COCO (~120K) x2
  - VOC 2012 (~17K) x2
  - CUB-200 2011 (~6K) x4
  - The Birder dataset (~8M, private dataset) x1

- **Papers:**
  - An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: <https://arxiv.org/abs/2010.11929>
  - Vision Transformers Need Registers: <https://arxiv.org/abs/2309.16588>
  - Self-Supervised Learning from Images with a Joint-Embedding Predictive Architecture: <https://arxiv.org/abs/2301.08243>

## Model Usage

### Image Embeddings

```python
import birder
from birder.inference.classification import infer_image

(net, model_info) = birder.load_pretrained_model("vit_reg4_m16_rms_avg_i-jepa", inference=True)

# Get the image size the model was trained on
size = birder.get_size_from_signature(model_info.signature)

# Create an inference transform
transform = birder.classification_transform(size, model_info.rgb_stats)

image = "path/to/image.jpeg"  # or a PIL image
(out, embedding) = infer_image(net, image, transform, return_embedding=True)
# embedding is a NumPy array with shape (1, 512)
```
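
The returned embedding can be used directly for similarity search, retrieval, or de-duplication. Below is a minimal sketch reusing `net` and `transform` from the snippet above; the image paths are placeholders:

```python
import numpy as np

# Embed two images with the model and transform loaded above
(_, emb_a) = infer_image(net, "path/to/image_a.jpeg", transform, return_embedding=True)
(_, emb_b) = infer_image(net, "path/to/image_b.jpeg", transform, return_embedding=True)

# Cosine similarity between the two (1, 512) embeddings
a = emb_a[0] / np.linalg.norm(emb_a[0])
b = emb_b[0] / np.linalg.norm(emb_b[0])
print(f"cosine similarity: {float(a @ b):.3f}")  # closer to 1.0 = more similar
```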

### Detection Feature Map

```python
from PIL import Image
import birder

(net, model_info) = birder.load_pretrained_model("vit_reg4_m16_rms_avg_i-jepa", inference=True)

# Get the image size the model was trained on
size = birder.get_size_from_signature(model_info.signature)

# Create an inference transform
transform = birder.classification_transform(size, model_info.rgb_stats)

image = Image.open("path/to/image.jpeg")
features = net.detection_features(transform(image).unsqueeze(0))
# features is a dict (stage name -> torch.Tensor)
print([(k, v.size()) for k, v in features.items()])
# Output example:
# [('neck', torch.Size([1, 512, 14, 14]))]
```
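
For downstream use, a feature map like this can be pooled into a global descriptor or fed to a task-specific head. The sketch below is illustrative only: the linear probe and class count are hypothetical and not part of Birder, and `features` comes from the snippet above:

```python
import torch

neck = features["neck"]         # (1, 512, 14, 14), per the example output above
pooled = neck.mean(dim=(2, 3))  # global average pool -> (1, 512)

num_classes = 10  # hypothetical downstream task
head = torch.nn.Linear(pooled.size(1), num_classes)
with torch.no_grad():
    logits = head(pooled)       # (1, num_classes)
print(logits.shape)
```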

## Citation

```bibtex
@misc{dosovitskiy2021imageworth16x16words,
      title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale},
      author={Alexey Dosovitskiy and Lucas Beyer and Alexander Kolesnikov and Dirk Weissenborn and Xiaohua Zhai and Thomas Unterthiner and Mostafa Dehghani and Matthias Minderer and Georg Heigold and Sylvain Gelly and Jakob Uszkoreit and Neil Houlsby},
      year={2021},
      eprint={2010.11929},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2010.11929},
}

@misc{darcet2024visiontransformersneedregisters,
      title={Vision Transformers Need Registers},
      author={Timothée Darcet and Maxime Oquab and Julien Mairal and Piotr Bojanowski},
      year={2024},
      eprint={2309.16588},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2309.16588},
}

@misc{assran2023selfsupervisedlearningimagesjointembedding,
      title={Self-Supervised Learning from Images with a Joint-Embedding Predictive Architecture},
      author={Mahmoud Assran and Quentin Duval and Ishan Misra and Piotr Bojanowski and Pascal Vincent and Michael Rabbat and Yann LeCun and Nicolas Ballas},
      year={2023},
      eprint={2301.08243},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2301.08243},
}
```