---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
  example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
  example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
  example_title: Palace
---

# Pyramid Vision Transformer (medium-sized model)

Pyramid Vision Transformer (PVT) model pre-trained on ImageNet-1K (also known as ImageNet 2012 or ILSVRC2012; 1 million images, 1,000 classes) at resolution 224x224. It was introduced in the paper [Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions](https://arxiv.org/abs/2102.12122) by Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo and Ling Shao, and first released in [this repository](https://github.com/whai362/PVT).

Disclaimer: The team releasing PVT did not write a model card for this model, so this model card has been written by [Rinat S. [@Xrenya]](https://huggingface.co/Xrenya).

## Model description

The Pyramid Vision Transformer (PVT) is a transformer encoder model (BERT-like) pretrained on ImageNet-1k (also referred to as ILSVRC2012), a dataset comprising 1 million images and 1,000 classes, also at resolution 224x224.

Images are presented to the model as a sequence of variable-size patches, which are linearly embedded. Unlike ViT models, PVT uses a progressive shrinking pyramid to reduce the computational cost of large feature maps at each stage. A [CLS] token is added to the beginning of the sequence for classification tasks, and absolute position embeddings are added before the sequence is fed to the layers of the Transformer encoder.
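
To see the pyramid structure concretely, one can inspect the checkpoint's configuration. The sketch below is illustrative only and assumes that `PvtConfig` in the `transformers` library exposes per-stage attributes named `patch_sizes`, `depths` and `hidden_sizes`:

```python
# Illustrative sketch (not from the original card); attribute names are assumed
# to match the PvtConfig implementation in transformers.
from transformers import PvtConfig

config = PvtConfig.from_pretrained("Zetatech/pvt-medium-224")
print(config.patch_sizes)   # patch size used at each pyramid stage
print(config.depths)        # number of encoder layers per stage
print(config.hidden_sizes)  # embedding dimension per stage
```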

By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images, for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places the linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of the entire image.
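
As a hedged illustration of this feature-extraction use (a sketch, not part of the original card), one can obtain the final-stage [CLS] representation from `PvtModel` and place an untrained linear head on top of it:

```python
# Minimal feature-extraction sketch; the linear head and number of labels below
# are hypothetical and would be trained on your own labeled data.
import torch
import requests
from PIL import Image
from transformers import PvtImageProcessor, PvtModel

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = PvtImageProcessor.from_pretrained("Zetatech/pvt-medium-224")
model = PvtModel.from_pretrained("Zetatech/pvt-medium-224")

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# last_hidden_state has shape (batch_size, sequence_length, hidden_size);
# the first token of the final stage plays the role of the [CLS] token
features = outputs.last_hidden_state[:, 0]

num_labels = 10  # hypothetical number of classes for a downstream task
classifier = torch.nn.Linear(features.shape[-1], num_labels)
logits = classifier(features)
```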

## Intended uses & limitations

You can use the raw model for image classification. See the [model hub](https://huggingface.co/Xrenya) to look for fine-tuned versions on a task that interests you.

### How to use

Here is how to use this model to classify an image from the COCO 2017 dataset into one of the 1,000 ImageNet classes:

```python
from transformers import PvtImageProcessor, PvtForImageClassification
from PIL import Image
import requests

url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)

processor = PvtImageProcessor.from_pretrained('Zetatech/pvt-medium-224')
model = PvtForImageClassification.from_pretrained('Zetatech/pvt-medium-224')

inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
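
Alternatively, the same checkpoint can be used through the high-level `pipeline` API; the following is a minimal sketch rather than part of the original card:

```python
from transformers import pipeline

# the image-classification pipeline wraps the processor and model shown above
classifier = pipeline("image-classification", model="Zetatech/pvt-medium-224")
predictions = classifier("http://images.cocodataset.org/val2017/000000039769.jpg")
print(predictions[0]["label"], predictions[0]["score"])
```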

For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/pvt.html#).

## Training data

The PVT model was pretrained on [ImageNet-1k](http://www.image-net.org/challenges/LSVRC/2012/), a dataset consisting of 1 million images and 1,000 classes.

## Training procedure

### Preprocessing

The exact details of preprocessing of images during training/validation can be found [here](https://github.com/whai362/PVT/blob/v2/classification/datasets.py).

Images are resized/rescaled to the same resolution (224x224) and normalized across the RGB channels with mean (0.485, 0.456, 0.406) and standard deviation (0.229, 0.224, 0.225).
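
For reference, the following is a rough torchvision equivalent of that resize/normalize step (a sketch only, not the authors' exact training pipeline):

```python
# Sketch of the described preprocessing using torchvision; it mirrors the
# resize/normalize step only, not any training-time augmentation.
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),  # converts to a tensor and rescales pixel values to [0, 1]
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# usage: pixel_values = preprocess(pil_image).unsqueeze(0)  # add a batch dimension
```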

### BibTeX entry and citation info

```bibtex
@inproceedings{wang2021pyramid,
  title={Pyramid vision transformer: A versatile backbone for dense prediction without convolutions},
  author={Wang, Wenhai and Xie, Enze and Li, Xiang and Fan, Deng-Ping and Song, Kaitao and Liang, Ding and Lu, Tong and Luo, Ping and Shao, Ling},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={568--578},
  year={2021}
}
```