qaihm-bot committed
Commit 8a434ec · verified · 1 Parent(s): 313fc8b

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +16 -17
README.md CHANGED
@@ -36,32 +36,30 @@ More details on model performance across various devices, can be found

  | Model | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model
  |---|---|---|---|---|---|---|---|---|
- | ConvNext-Tiny-w8a16-Quantized | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | QNN | 3.442 ms | 0 - 138 MB | INT8 | NPU | [ConvNext-Tiny-w8a16-Quantized.so](https://huggingface.co/qualcomm/ConvNext-Tiny-w8a16-Quantized/blob/main/ConvNext-Tiny-w8a16-Quantized.so) |
- | ConvNext-Tiny-w8a16-Quantized | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | QNN | 2.456 ms | 0 - 38 MB | INT8 | NPU | [ConvNext-Tiny-w8a16-Quantized.so](https://huggingface.co/qualcomm/ConvNext-Tiny-w8a16-Quantized/blob/main/ConvNext-Tiny-w8a16-Quantized.so) |
- | ConvNext-Tiny-w8a16-Quantized | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | QNN | 2.432 ms | 0 - 38 MB | INT8 | NPU | Use Export Script |
- | ConvNext-Tiny-w8a16-Quantized | RB3 Gen 2 (Proxy) | QCS6490 Proxy | QNN | 13.141 ms | 0 - 12 MB | INT8 | NPU | Use Export Script |
- | ConvNext-Tiny-w8a16-Quantized | QCS8550 (Proxy) | QCS8550 Proxy | QNN | 3.1 ms | 0 - 3 MB | INT8 | NPU | Use Export Script |
- | ConvNext-Tiny-w8a16-Quantized | SA7255P ADP | SA7255P | QNN | 26.846 ms | 0 - 10 MB | INT8 | NPU | Use Export Script |
- | ConvNext-Tiny-w8a16-Quantized | SA8255 (Proxy) | SA8255P Proxy | QNN | 3.108 ms | 0 - 3 MB | INT8 | NPU | Use Export Script |
- | ConvNext-Tiny-w8a16-Quantized | SA8295P ADP | SA8295P | QNN | 4.709 ms | 0 - 15 MB | INT8 | NPU | Use Export Script |
- | ConvNext-Tiny-w8a16-Quantized | SA8650 (Proxy) | SA8650P Proxy | QNN | 3.1 ms | 0 - 3 MB | INT8 | NPU | Use Export Script |
- | ConvNext-Tiny-w8a16-Quantized | SA8775P ADP | SA8775P | QNN | 4.465 ms | 0 - 10 MB | INT8 | NPU | Use Export Script |
- | ConvNext-Tiny-w8a16-Quantized | QCS8450 (Proxy) | QCS8450 Proxy | QNN | 4.289 ms | 0 - 41 MB | INT8 | NPU | Use Export Script |
- | ConvNext-Tiny-w8a16-Quantized | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN | 3.381 ms | 0 - 0 MB | INT8 | NPU | Use Export Script |
+ | ConvNext-Tiny-w8a16-Quantized | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | QNN | 3.426 ms | 0 - 126 MB | INT8 | NPU | [ConvNext-Tiny-w8a16-Quantized.so](https://huggingface.co/qualcomm/ConvNext-Tiny-w8a16-Quantized/blob/main/ConvNext-Tiny-w8a16-Quantized.so) |
+ | ConvNext-Tiny-w8a16-Quantized | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | QNN | 2.459 ms | 0 - 42 MB | INT8 | NPU | [ConvNext-Tiny-w8a16-Quantized.so](https://huggingface.co/qualcomm/ConvNext-Tiny-w8a16-Quantized/blob/main/ConvNext-Tiny-w8a16-Quantized.so) |
+ | ConvNext-Tiny-w8a16-Quantized | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | QNN | 2.445 ms | 0 - 44 MB | INT8 | NPU | Use Export Script |
+ | ConvNext-Tiny-w8a16-Quantized | RB3 Gen 2 (Proxy) | QCS6490 Proxy | QNN | 13.081 ms | 0 - 12 MB | INT8 | NPU | Use Export Script |
+ | ConvNext-Tiny-w8a16-Quantized | QCS8550 (Proxy) | QCS8550 Proxy | QNN | 3.088 ms | 0 - 4 MB | INT8 | NPU | Use Export Script |
+ | ConvNext-Tiny-w8a16-Quantized | SA8255 (Proxy) | SA8255P Proxy | QNN | 3.098 ms | 0 - 3 MB | INT8 | NPU | Use Export Script |
+ | ConvNext-Tiny-w8a16-Quantized | SA8295P ADP | SA8295P | QNN | 5.267 ms | 0 - 15 MB | INT8 | NPU | Use Export Script |
+ | ConvNext-Tiny-w8a16-Quantized | SA8650 (Proxy) | SA8650P Proxy | QNN | 3.113 ms | 0 - 3 MB | INT8 | NPU | Use Export Script |
+ | ConvNext-Tiny-w8a16-Quantized | SA8775P ADP | SA8775P | QNN | 4.498 ms | 0 - 10 MB | INT8 | NPU | Use Export Script |
+ | ConvNext-Tiny-w8a16-Quantized | QCS8450 (Proxy) | QCS8450 Proxy | QNN | 4.174 ms | 0 - 38 MB | INT8 | NPU | Use Export Script |
+ | ConvNext-Tiny-w8a16-Quantized | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN | 3.393 ms | 0 - 0 MB | INT8 | NPU | Use Export Script |




  ## Installation

- This model can be installed as a Python package via pip.

+ Install the package via pip:
  ```bash
- pip install "qai-hub-models[convnext_tiny_w8a16_quantized]"
+ pip install "qai-hub-models[convnext-tiny-w8a16-quantized]"
  ```


-
  ## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device

  Sign-in to [Qualcomm® AI Hub](https://app.aihub.qualcomm.com/) with your
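For context on the hunk above: the commit renames the pip extra (underscores to hyphens) and tightens the install wording, and the Configure section it trails into sets up the Qualcomm® AI Hub client. The following is an illustrative sketch only, not part of this commit: the module path is assumed to mirror the pip extra (with underscores), and `YOUR_API_TOKEN` is a placeholder for the token from your AI Hub account settings.

```bash
# Install the package with the model-specific extra (new spelling from this commit).
pip install "qai-hub-models[convnext-tiny-w8a16-quantized]"

# Configure the AI Hub client so jobs can run on cloud-hosted devices.
# YOUR_API_TOKEN is a placeholder for your own token.
qai-hub configure --api_token YOUR_API_TOKEN

# Sanity-check the install by running the model's demo entry point
# (module name assumed to mirror the pip extra, with underscores).
python -m qai_hub_models.models.convnext_tiny_w8a16_quantized.demo
```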
@@ -113,7 +111,7 @@ ConvNext-Tiny-w8a16-Quantized
  Device : Samsung Galaxy S23 (13)
  Runtime : QNN
  Estimated inference time (ms) : 3.4
- Estimated peak memory usage (MB): [0, 138]
+ Estimated peak memory usage (MB): [0, 126]
  Total # Ops : 215
  Compute Unit(s) : NPU (215 ops)
  ```
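The profiling summary in the hunk above (inference time, peak memory range, op placement on the NPU) is the kind of report returned after a profile job on a cloud-hosted device. A minimal sketch, assuming the package's standard per-model export entry point; exact flags and output formatting may differ:

```bash
# Compile and profile this model on a hosted device via Qualcomm® AI Hub;
# the job output includes estimates like the ones shown above.
python -m qai_hub_models.models.convnext_tiny_w8a16_quantized.export
```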
@@ -156,7 +154,8 @@ Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/)


  ## License
- * The license for the original implementation of ConvNext-Tiny-w8a16-Quantized can be found [here](https://github.com/pytorch/vision/blob/main/LICENSE).
+ * The license for the original implementation of ConvNext-Tiny-w8a16-Quantized can be found
+ [here](https://github.com/pytorch/vision/blob/main/LICENSE).
  * The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf)
