ConvNext-Tiny: Optimized for Mobile Deployment
ImageNet classifier and general-purpose backbone
ConvNext-Tiny is a machine learning model that can classify images from the ImageNet dataset. It can also be used as a backbone in building more complex models for specific use cases.
This model is an implementation of ConvNext-Tiny found here.
This repository provides scripts to run ConvNext-Tiny on Qualcomm® devices. More details on model performance across various devices can be found here.
Model Details
- Model Type: Image classification
- Model Stats:
- Model checkpoint: Imagenet
- Input resolution: 224x224
- Number of parameters: 28.6M
- Model size (float): 109 MB
- Model size (w8a16): 28.9 MB
Model | Precision | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit | Target Model |
---|---|---|---|---|---|---|---|---|
ConvNext-Tiny | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 14.189 ms | 0 - 99 MB | NPU | ConvNext-Tiny.tflite |
ConvNext-Tiny | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 15.195 ms | 1 - 99 MB | NPU | ConvNext-Tiny.dlc |
ConvNext-Tiny | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 7.848 ms | 0 - 110 MB | NPU | ConvNext-Tiny.tflite |
ConvNext-Tiny | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 9.689 ms | 1 - 112 MB | NPU | ConvNext-Tiny.dlc |
ConvNext-Tiny | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 3.01 ms | 0 - 484 MB | NPU | ConvNext-Tiny.tflite |
ConvNext-Tiny | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 3.697 ms | 0 - 36 MB | NPU | ConvNext-Tiny.dlc |
ConvNext-Tiny | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 4.279 ms | 0 - 99 MB | NPU | ConvNext-Tiny.tflite |
ConvNext-Tiny | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 5.121 ms | 1 - 99 MB | NPU | ConvNext-Tiny.dlc |
ConvNext-Tiny | float | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 14.189 ms | 0 - 99 MB | NPU | ConvNext-Tiny.tflite |
ConvNext-Tiny | float | SA7255P ADP | Qualcomm® SA7255P | QNN_DLC | 15.195 ms | 1 - 99 MB | NPU | ConvNext-Tiny.dlc |
ConvNext-Tiny | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | TFLITE | 2.989 ms | 0 - 486 MB | NPU | ConvNext-Tiny.tflite |
ConvNext-Tiny | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_DLC | 3.729 ms | 1 - 25 MB | NPU | ConvNext-Tiny.dlc |
ConvNext-Tiny | float | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 8.863 ms | 0 - 98 MB | NPU | ConvNext-Tiny.tflite |
ConvNext-Tiny | float | SA8295P ADP | Qualcomm® SA8295P | QNN_DLC | 9.372 ms | 1 - 103 MB | NPU | ConvNext-Tiny.dlc |
ConvNext-Tiny | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | TFLITE | 2.991 ms | 0 - 490 MB | NPU | ConvNext-Tiny.tflite |
ConvNext-Tiny | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_DLC | 3.698 ms | 0 - 20 MB | NPU | ConvNext-Tiny.dlc |
ConvNext-Tiny | float | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 4.279 ms | 0 - 99 MB | NPU | ConvNext-Tiny.tflite |
ConvNext-Tiny | float | SA8775P ADP | Qualcomm® SA8775P | QNN_DLC | 5.121 ms | 1 - 99 MB | NPU | ConvNext-Tiny.dlc |
ConvNext-Tiny | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 2.952 ms | 0 - 448 MB | NPU | ConvNext-Tiny.tflite |
ConvNext-Tiny | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 3.719 ms | 0 - 16 MB | NPU | ConvNext-Tiny.dlc |
ConvNext-Tiny | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 3.145 ms | 0 - 83 MB | NPU | ConvNext-Tiny.onnx |
ConvNext-Tiny | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 2.204 ms | 0 - 107 MB | NPU | ConvNext-Tiny.tflite |
ConvNext-Tiny | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 2.619 ms | 1 - 106 MB | NPU | ConvNext-Tiny.dlc |
ConvNext-Tiny | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 2.257 ms | 0 - 112 MB | NPU | ConvNext-Tiny.onnx |
ConvNext-Tiny | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 1.949 ms | 0 - 102 MB | NPU | ConvNext-Tiny.tflite |
ConvNext-Tiny | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 2.324 ms | 0 - 104 MB | NPU | ConvNext-Tiny.dlc |
ConvNext-Tiny | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 2.039 ms | 1 - 101 MB | NPU | ConvNext-Tiny.onnx |
ConvNext-Tiny | float | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 4.141 ms | 397 - 397 MB | NPU | ConvNext-Tiny.dlc |
ConvNext-Tiny | float | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 3.03 ms | 57 - 57 MB | NPU | ConvNext-Tiny.onnx |
ConvNext-Tiny | w8a16 | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 6.73 ms | 0 - 59 MB | NPU | ConvNext-Tiny.dlc |
ConvNext-Tiny | w8a16 | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 4.242 ms | 0 - 69 MB | NPU | ConvNext-Tiny.dlc |
ConvNext-Tiny | w8a16 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 3.138 ms | 0 - 16 MB | NPU | ConvNext-Tiny.dlc |
ConvNext-Tiny | w8a16 | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 3.488 ms | 0 - 59 MB | NPU | ConvNext-Tiny.dlc |
ConvNext-Tiny | w8a16 | RB3 Gen 2 (Proxy) | Qualcomm® QCS6490 (Proxy) | QNN_DLC | 12.513 ms | 0 - 106 MB | NPU | ConvNext-Tiny.dlc |
ConvNext-Tiny | w8a16 | SA7255P ADP | Qualcomm® SA7255P | QNN_DLC | 6.73 ms | 0 - 59 MB | NPU | ConvNext-Tiny.dlc |
ConvNext-Tiny | w8a16 | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_DLC | 3.121 ms | 0 - 17 MB | NPU | ConvNext-Tiny.dlc |
ConvNext-Tiny | w8a16 | SA8295P ADP | Qualcomm® SA8295P | QNN_DLC | 4.616 ms | 0 - 62 MB | NPU | ConvNext-Tiny.dlc |
ConvNext-Tiny | w8a16 | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_DLC | 3.144 ms | 0 - 14 MB | NPU | ConvNext-Tiny.dlc |
ConvNext-Tiny | w8a16 | SA8775P ADP | Qualcomm® SA8775P | QNN_DLC | 3.488 ms | 0 - 59 MB | NPU | ConvNext-Tiny.dlc |
ConvNext-Tiny | w8a16 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 3.127 ms | 0 - 17 MB | NPU | ConvNext-Tiny.dlc |
ConvNext-Tiny | w8a16 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 82.564 ms | 204 - 356 MB | NPU | ConvNext-Tiny.onnx |
ConvNext-Tiny | w8a16 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 2.145 ms | 23 - 96 MB | NPU | ConvNext-Tiny.dlc |
ConvNext-Tiny | w8a16 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 65.694 ms | 224 - 580 MB | NPU | ConvNext-Tiny.onnx |
ConvNext-Tiny | w8a16 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 1.863 ms | 0 - 62 MB | NPU | ConvNext-Tiny.dlc |
ConvNext-Tiny | w8a16 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 59.251 ms | 217 - 546 MB | NPU | ConvNext-Tiny.onnx |
ConvNext-Tiny | w8a16 | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 3.53 ms | 194 - 194 MB | NPU | ConvNext-Tiny.dlc |
ConvNext-Tiny | w8a16 | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 91.518 ms | 232 - 232 MB | NPU | ConvNext-Tiny.onnx |
Installation
Install the package via pip:
```bash
pip install qai-hub-models
```
Configure Qualcomm® AI Hub to run this model on a cloud-hosted device
Sign in to Qualcomm® AI Hub with your Qualcomm® ID. Once signed in, navigate to Account -> Settings -> API Token.
With this API token, you can configure your client to run models on cloud-hosted devices.
```bash
qai-hub configure --api_token API_TOKEN
```
Navigate to docs for more information.
Demo off target
The package contains a simple end-to-end demo that downloads pre-trained weights and runs this model on a sample input.
```bash
python -m qai_hub_models.models.convnext_tiny.demo
```
The above demo runs a reference implementation of pre-processing, model inference, and post-processing.
NOTE: If you are running in a Jupyter Notebook or Google Colab-like environment, add the following to your cell instead of the above command.
```
%run -m qai_hub_models.models.convnext_tiny.demo
```
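If you prefer to call the model directly from Python, here is a minimal off-target sketch using the same from_pretrained and sample_inputs helpers shown in the export walkthrough below. The (1, 1000) output shape is the expected ImageNet logits and is an assumption worth verifying:

```python
import torch

from qai_hub_models.models.convnext_tiny import Model

# Load pre-trained weights and a ready-made sample input
model = Model.from_pretrained()
sample_inputs = model.sample_inputs()  # dict: input name -> list of numpy arrays

# Run a forward pass off target
inputs = [torch.tensor(data[0]) for _, data in sample_inputs.items()]
with torch.no_grad():
    logits = model(*inputs)

print(logits.shape)  # expected: torch.Size([1, 1000]) ImageNet class scores
```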
Run model on a cloud-hosted device
In addition to the demo, you can also run the model on a cloud-hosted Qualcomm® device. This script does the following:
- Performance check on-device on a cloud-hosted device
- Downloads compiled assets that can be deployed on-device for Android.
- Accuracy check between PyTorch and on-device outputs.
```bash
python -m qai_hub_models.models.convnext_tiny.export
```
Profiling Results
```
------------------------------------------------------------
ConvNext-Tiny
Device                          : cs_8275 (ANDROID 14)
Runtime                         : TFLITE
Estimated inference time (ms)   : 14.2
Estimated peak memory usage (MB): [0, 99]
Total # Ops                     : 328
Compute Unit(s)                 : npu (328 ops) gpu (0 ops) cpu (0 ops)
```
How does this work?
This export script leverages Qualcomm® AI Hub to optimize, validate, and deploy this model on-device. Let's go through each step below in detail:
Step 1: Compile model for on-device deployment
To compile a PyTorch model for on-device deployment, we first trace the model in memory using torch.jit.trace and then call the submit_compile_job API.
```python
import torch

import qai_hub as hub
from qai_hub_models.models.convnext_tiny import Model

# Load the model
torch_model = Model.from_pretrained()

# Device
device = hub.Device("Samsung Galaxy S24")

# Trace model
input_spec = torch_model.get_input_spec()
sample_inputs = torch_model.sample_inputs()

pt_model = torch.jit.trace(
    torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()]
)

# Compile model on a specific device
compile_job = hub.submit_compile_job(
    model=pt_model,
    device=device,
    input_specs=input_spec,
)

# Get target model to run on-device
target_model = compile_job.get_target_model()
```
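If you need the compiled asset locally (for example, to bundle it into an Android app as described later), it can be saved to disk. A minimal sketch, assuming qai_hub's Model.download API; the file name is illustrative:

```python
# Save the compiled model to disk (file name is illustrative)
target_model.download("ConvNext-Tiny.tflite")
```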
Step 2: Performance profiling on cloud-hosted device
After compiling the model in step 1, it can be profiled on-device using the target_model. Note that this script runs the model on a device automatically provisioned in the cloud. Once the job is submitted, you can navigate to a provided job URL to view a variety of on-device performance metrics.
```python
# Profile the compiled model on a cloud-hosted device
profile_job = hub.submit_profile_job(
    model=target_model,
    device=device,
)
```
Step 3: Verify on-device accuracy
To verify the accuracy of the model on-device, you can run on-device inference on sample input data on the same cloud-hosted device.
```python
# Run inference with the compiled model on a cloud-hosted device
input_data = torch_model.sample_inputs()
inference_job = hub.submit_inference_job(
    model=target_model,
    device=device,
    inputs=input_data,
)
on_device_output = inference_job.download_output_data()
```
With the output of the model, you can compute metrics such as PSNR or relative error, or spot-check the output against the expected output.
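As an illustration, here is a minimal sketch comparing the on-device output against the PyTorch reference. It assumes download_output_data returns a dict mapping output names to lists of numpy arrays; the single-output indexing and the PSNR formula below are illustrative:

```python
import numpy as np
import torch

# Reference output from the PyTorch model on the same sample inputs
torch_inputs = [torch.tensor(data[0]) for _, data in input_data.items()]
with torch.no_grad():
    reference = torch_model(*torch_inputs).numpy()

# First (and assumed only) on-device output tensor
device_out = list(on_device_output.values())[0][0]

# Peak signal-to-noise ratio and relative error (higher PSNR is better)
mse = np.mean((reference - device_out) ** 2)
psnr = 10 * np.log10(np.max(np.abs(reference)) ** 2 / mse)
rel_err = np.linalg.norm(reference - device_out) / np.linalg.norm(reference)
print(f"PSNR: {psnr:.1f} dB, relative error: {rel_err:.2e}")
```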
Note: On-device profiling and inference require access to Qualcomm® AI Hub. Sign up for access.
Run demo on a cloud-hosted device
You can also run the demo on-device.
```bash
python -m qai_hub_models.models.convnext_tiny.demo --eval-mode on-device
```
NOTE: If you are running in a Jupyter Notebook or Google Colab-like environment, add the following to your cell instead of the above command.
```
%run -m qai_hub_models.models.convnext_tiny.demo -- --eval-mode on-device
```
Deploying compiled model to Android
The models can be deployed using multiple runtimes:
- TensorFlow Lite (.tflite export): This tutorial provides a guide to deploy the .tflite model in an Android application.
- QNN (.so export): This sample app provides instructions on how to use the .so shared library in an Android application.
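Before wiring the model into an app, it can help to sanity-check the exported .tflite artifact on the host. A minimal sketch, assuming TensorFlow is installed and ConvNext-Tiny.tflite is in the working directory:

```python
import numpy as np
import tensorflow as tf

# Load the exported model and allocate tensors
interpreter = tf.lite.Interpreter(model_path="ConvNext-Tiny.tflite")
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Run a random 224x224 input through the classifier
dummy = np.random.rand(*inp["shape"]).astype(inp["dtype"])
interpreter.set_tensor(inp["index"], dummy)
interpreter.invoke()

logits = interpreter.get_tensor(out["index"])
print(logits.shape)  # expect (1, 1000) ImageNet class scores
```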
View on Qualcomm® AI Hub
Get more details on ConvNext-Tiny's performance across various devices here. Explore all available models on Qualcomm® AI Hub.
License
- The license for the original implementation of ConvNext-Tiny can be found here.
- The license for the compiled assets for on-device deployment can be found here.
Community
- Join our AI Hub Slack community to collaborate, post questions and learn more about on-device AI.
- For questions or feedback please reach out to us.