royleibov committed
Commit 98da32a · verified · 1 Parent(s): 29c3524

Add zipnn text

Files changed (1): README.md (+58 -4)
README.md CHANGED
@@ -2,10 +2,60 @@
 tags:
 - vision
 widget:
-- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/cat-dog-music.png
+- src: >-
+    https://huggingface.co/datasets/mishig/sample_images/resolve/main/cat-dog-music.png
   candidate_labels: playing music, playing sports
   example_title: Cat & Dog
+license: mit
+base_model:
+- openai/clip-vit-base-patch16
 ---
+# Disclaimer and Requirements
+
+This model is a clone of [**openai/clip-vit-base-patch16**](https://huggingface.co/openai/clip-vit-base-patch16) compressed using ZipNN. Compressed losslessly to 60% of its original size, ZipNN saved ~0.3GB in storage and potentially ~9PB in data transfer **monthly**.
+
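+As a quick sanity check on the size claim, you can compare repository file sizes with `huggingface_hub` (a minimal sketch; the summed sizes cover every file in each repo, not just the weights):
+```python
+# Sketch: compare total file sizes of the compressed clone vs. the original repo
+from huggingface_hub import HfApi
+
+api = HfApi()
+
+def repo_size_gb(repo_id):
+    info = api.model_info(repo_id, files_metadata=True)
+    return sum(f.size or 0 for f in info.siblings) / 1e9
+
+print("compressed:", repo_size_gb("royleibov/clip-vit-base-patch16-ZipNN-Compressed"))
+print("original:  ", repo_size_gb("openai/clip-vit-base-patch16"))
+```
+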
+### Requirement
+
+To use the model, ZipNN must be installed:
+```bash
+pip install zipnn
+```
+### Use This Model
+```python
+# Use a pipeline as a high-level helper
+from transformers import pipeline
+from zipnn import zipnn_hf
+
+zipnn_hf()
+
+pipe = pipeline("zero-shot-image-classification", model="royleibov/clip-vit-base-patch16-ZipNN-Compressed")
+```
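+For example, a minimal call (a sketch; the image URL and labels below are placeholders taken from this card):
+```python
+# Sketch: classify one image against free-form labels
+result = pipe(
+    "http://images.cocodataset.org/val2017/000000039769.jpg",  # any image URL or PIL.Image
+    candidate_labels=["playing music", "playing sports"],
+)
+print(result)  # list of {"score": ..., "label": ...}, sorted by score
+```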
+```python
+# Load model directly
+from transformers import AutoProcessor, AutoModelForZeroShotImageClassification
+from zipnn import zipnn_hf
+
+zipnn_hf()
+
+processor = AutoProcessor.from_pretrained("royleibov/clip-vit-base-patch16-ZipNN-Compressed")
+model = AutoModelForZeroShotImageClassification.from_pretrained("royleibov/clip-vit-base-patch16-ZipNN-Compressed")
+```
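+With the model loaded this way, a forward pass might look like the following (a sketch; the image and labels are placeholders, and the scoring step mirrors the CLIP snippet further down this card):
+```python
+import requests
+from PIL import Image
+
+# Sketch: score an image against two candidate captions
+image = Image.open(requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)
+inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True)
+outputs = model(**inputs)
+probs = outputs.logits_per_image.softmax(dim=1)  # image-text similarity as probabilities
+```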
+### ZipNN
+ZipNN also lets you seamlessly save local disk space in your cache after the model is downloaded.
+
+To compress the cached model, simply run:
+```bash
+python zipnn_compress_path.py bin --model royleibov/clip-vit-base-patch16-ZipNN-Compressed --hf_cache
+```
+
+The model will be decompressed automatically and safely as long as `zipnn_hf()` is added at the top of the file, as in the [example above](#use-this-model).
+
+To decompress manually, simply run:
+```bash
+python zipnn_decompress_path.py --model royleibov/clip-vit-base-patch16-ZipNN-Compressed --hf_cache
+```
+
 # Model Card: CLIP
 Disclaimer: The model card is taken and modified from the official CLIP repository; it can be found [here](https://github.com/openai/CLIP/blob/main/model-card.md).

@@ -34,8 +84,12 @@ The original implementation had two variants: one using a ResNet image encoder and the other using a Vision Transformer.
 from PIL import Image
 import requests
 from transformers import CLIPProcessor, CLIPModel
-model = CLIPModel.from_pretrained("openai/clip-vit-base-patch16")
-processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch16")
+from zipnn import zipnn_hf
+
+zipnn_hf()
+
+model = CLIPModel.from_pretrained("royleibov/clip-vit-base-patch16-ZipNN-Compressed")
+processor = CLIPProcessor.from_pretrained("royleibov/clip-vit-base-patch16-ZipNN-Compressed")
 url = "http://images.cocodataset.org/val2017/000000039769.jpg"
 image = Image.open(requests.get(url, stream=True).raw)
 inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True)
@@ -123,4 +177,4 @@ We also tested the performance of CLIP on gender, race and age classification using the FairFace dataset


 ### Where to send questions or comments about the model
-Please use [this Google Form](https://forms.gle/Uv7afRH5dvY34ZEs9)
+Please use [this Google Form](https://forms.gle/Uv7afRH5dvY34ZEs9)