Upload README.md with huggingface_hub
README.md (CHANGED)
@@ -17,7 +17,7 @@ tags:
# MobileNet-v3-Large: Optimized for Mobile Deployment
## Imagenet classifier and general purpose backbone

-
+MobileNet-v3-Large is a machine learning model that can classify images from the Imagenet dataset. It can also be used as a backbone in building more complex models for specific use cases.

This model is an implementation of MobileNet-v3-Large found [here](https://github.com/pytorch/vision/blob/main/torchvision/models/mobilenetv3.py).
This repository provides scripts to run MobileNet-v3-Large on Qualcomm® devices.
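The added description above points at the torchvision reference implementation. As a minimal illustrative sketch, assuming torchvision 0.13 or newer with its bundled Imagenet weights (this code is not part of the README itself), the reference model can be loaded and exercised like this:

```python
# Minimal sketch, assuming torchvision >= 0.13; loads the reference torchvision
# implementation linked above and runs a single dummy inference.
import torch
from torchvision.models import mobilenet_v3_large, MobileNet_V3_Large_Weights

weights = MobileNet_V3_Large_Weights.IMAGENET1K_V1   # pretrained Imagenet weights
model = mobilenet_v3_large(weights=weights).eval()

preprocess = weights.transforms()                    # resize / crop / normalize pipeline
dummy = torch.rand(3, 224, 224)                      # stand-in for a real image tensor
with torch.no_grad():
    logits = model(preprocess(dummy).unsqueeze(0))   # (1, 1000) Imagenet class scores
print(logits.argmax(dim=1).item())
```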
@@ -37,7 +37,7 @@ More details on model performance across various devices, can be found

| Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model
| ---|---|---|---|---|---|---|---|
-| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 0.
+| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 0.603 ms | 0 - 2 MB | FP16 | NPU | [MobileNet-v3-Large.tflite](https://huggingface.co/qualcomm/MobileNet-v3-Large/blob/main/MobileNet-v3-Large.tflite)


## Installation
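The updated table row links a compiled TFLite asset. As an illustrative sketch only, assuming the `tensorflow` and `huggingface_hub` packages are available (the 0.603 ms figure comes from the device NPU and will not be reproduced by local CPU execution), the asset can be pulled and run like this:

```python
# Sketch only; assumes `tensorflow` and `huggingface_hub` are installed. Downloads the
# TFLite asset referenced in the table and runs one inference on the local CPU.
import numpy as np
import tensorflow as tf
from huggingface_hub import hf_hub_download

tflite_path = hf_hub_download(
    repo_id="qualcomm/MobileNet-v3-Large",
    filename="MobileNet-v3-Large.tflite",
)

interpreter = tf.lite.Interpreter(model_path=tflite_path)
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Random input with the model's expected shape/dtype; replace with a preprocessed image.
dummy = np.random.random_sample(tuple(inp["shape"])).astype(inp["dtype"])
interpreter.set_tensor(inp["index"], dummy)
interpreter.invoke()
print(interpreter.get_tensor(out["index"]).shape)
```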
@@ -97,9 +97,9 @@ python -m qai_hub_models.models.mobilenet_v3_large.export
```
Profile Job summary of MobileNet-v3-Large
--------------------------------------------------
-Device: Samsung Galaxy
-Estimated Inference Time: 0.
-Estimated Peak Memory Range: 0.
+Device: Samsung Galaxy S24 (14)
+Estimated Inference Time: 0.43 ms
+Estimated Peak Memory Range: 0.01-57.22 MB
Compute Units: NPU (134) | Total (134)


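The updated profile summary above is printed by the export entry point named in the hunk header. As a rough sketch, assuming `qai_hub_models` is installed and configured as described in the Installation section and that the module writes its summary to stdout, that command can be driven from Python as follows:

```python
# Sketch only: runs the export module named in the hunk header via subprocess and
# prints its output. Assumes qai_hub_models is installed and configured; the exact
# flags and output format are defined by the package, not by this snippet.
import subprocess
import sys

result = subprocess.run(
    [sys.executable, "-m", "qai_hub_models.models.mobilenet_v3_large.export"],
    capture_output=True,
    text=True,
    check=True,
)

# On success, the output includes a "Profile Job summary" block with the device,
# estimated inference time, estimated peak memory range, and compute units.
print(result.stdout)
```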
@@ -219,7 +219,7 @@ Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/)
## License
- The license for the original implementation of MobileNet-v3-Large can be found
[here](https://github.com/pytorch/vision/blob/main/LICENSE).
-- The license for the compiled assets for on-device deployment can be found [here](
+- The license for the compiled assets for on-device deployment can be found [here]({deploy_license_url})

## References
* [Searching for MobileNetV3](https://arxiv.org/abs/1905.02244)