
ConvNext-Tiny-w8a16-Quantized: Optimized for Mobile Deployment

Imagenet classifier and general purpose backbone

ConvNext-Tiny is a machine learning model that can classify images from the Imagenet dataset. It can also be used as a backbone in building more complex models for specific use cases.

This model is an implementation of ConvNext-Tiny-w8a16-Quantized found here.

This repository provides scripts to run ConvNext-Tiny-w8a16-Quantized on Qualcomm® devices. More details on model performance across various devices can be found here.

Model Details

  • Model Type: Image classification
  • Model Stats:
    • Model checkpoint: Imagenet
    • Input resolution: 224x224
    • Number of parameters: 28.6M
    • Model size: 28 MB
    • Precision: w8a16 (8-bit weights, 16-bit activations)

| Model | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model |
|---|---|---|---|---|---|---|---|---|
| ConvNext-Tiny-w8a16-Quantized | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | QNN | 3.585 | 0 - 19 | INT8 | NPU | ConvNext-Tiny-w8a16-Quantized.so |
| ConvNext-Tiny-w8a16-Quantized | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | QNN | 2.618 | 0 - 34 | INT8 | NPU | ConvNext-Tiny-w8a16-Quantized.so |
| ConvNext-Tiny-w8a16-Quantized | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | QNN | 2.456 | 0 - 33 | INT8 | NPU | Use Export Script |
| ConvNext-Tiny-w8a16-Quantized | RB3 Gen 2 (Proxy) | QCS6490 Proxy | QNN | 13.316 | 0 - 8 | INT8 | NPU | Use Export Script |
| ConvNext-Tiny-w8a16-Quantized | QCS8550 (Proxy) | QCS8550 Proxy | QNN | 3.176 | 0 - 1 | INT8 | NPU | Use Export Script |
| ConvNext-Tiny-w8a16-Quantized | SA8255 (Proxy) | SA8255P Proxy | QNN | 3.193 | 0 - 2 | INT8 | NPU | Use Export Script |
| ConvNext-Tiny-w8a16-Quantized | SA8775 (Proxy) | SA8775P Proxy | QNN | 3.188 | 0 - 2 | INT8 | NPU | Use Export Script |
| ConvNext-Tiny-w8a16-Quantized | SA8650 (Proxy) | SA8650P Proxy | QNN | 3.203 | 0 - 2 | INT8 | NPU | Use Export Script |
| ConvNext-Tiny-w8a16-Quantized | SA8295P ADP | SA8295P | QNN | 4.76 | 0 - 6 | INT8 | NPU | Use Export Script |
| ConvNext-Tiny-w8a16-Quantized | QCS8450 (Proxy) | QCS8450 Proxy | QNN | 4.253 | 0 - 38 | INT8 | NPU | Use Export Script |
| ConvNext-Tiny-w8a16-Quantized | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN | 3.543 | 0 - 0 | INT8 | NPU | Use Export Script |

Installation

This model can be installed as a Python package via pip.

pip install "qai-hub-models[convnext_tiny_w8a16_quantized]"
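
Once installed, the model can be loaded directly in Python. The sketch below is a minimal sanity check that loads the pre-trained weights and runs a forward pass on a random input; it assumes the package follows the usual qai-hub-models convention of exposing a Model class with a from_pretrained() constructor:

import torch

from qai_hub_models.models.convnext_tiny_w8a16_quantized import Model

# Load the pre-trained, quantized ConvNext-Tiny model.
model = Model.from_pretrained()
model.eval()

# Forward pass on a random 224x224 RGB input.
with torch.no_grad():
    logits = model(torch.rand(1, 3, 224, 224))
print(logits.shape)  # Expect (1, 1000) Imagenet class logits.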

Configure Qualcomm® AI Hub to run this model on a cloud-hosted device

Sign in to Qualcomm® AI Hub with your Qualcomm® ID. Once signed in, navigate to Account -> Settings -> API Token.

With this API token, you can configure your client to run models on cloud-hosted devices.

qai-hub configure --api_token API_TOKEN

Navigate to docs for more information.
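
To confirm the client is configured correctly, you can list the cloud-hosted devices available to your account. A minimal sketch using the qai_hub client (the exact device names printed depend on your account):

import qai_hub as hub

# List the cloud-hosted devices your API token can access.
for device in hub.get_devices():
    print(device.name)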

Demo off target

The package contains a simple end-to-end demo that downloads pre-trained weights and runs this model on a sample input.

python -m qai_hub_models.models.convnext_tiny_w8a16_quantized.demo

The above demo runs a reference implementation of pre-processing, model inference, and post processing.
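
For orientation, the following is a minimal sketch of what such a pipeline looks like in plain PyTorch. The transform shown is the standard Imagenet preprocessing and the input file name is a placeholder; the packaged demo ships its own reference pre- and post-processing, so treat this as illustrative only:

import torch
from PIL import Image
from torchvision import transforms

from qai_hub_models.models.convnext_tiny_w8a16_quantized import Model

# Standard Imagenet preprocessing for a 224x224 input (an assumption;
# the packaged demo's reference pre-processing may differ).
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = Model.from_pretrained()
model.eval()

image = Image.open("sample.jpg").convert("RGB")  # placeholder input file
with torch.no_grad():
    logits = model(preprocess(image).unsqueeze(0))
print(logits.argmax(dim=-1).item())  # predicted Imagenet class index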

NOTE: If you want to run this in a Jupyter Notebook or Google Colab-like environment, please add the following to your cell (instead of the above command).

%run -m qai_hub_models.models.convnext_tiny_w8a16_quantized.demo

Run model on a cloud-hosted device

In addition to the demo, you can run the model on a cloud-hosted Qualcomm® device. The export script below does the following:

  • Runs a performance check on-device on a cloud-hosted device.
  • Downloads compiled assets that can be deployed on-device for Android.
  • Runs an accuracy check between PyTorch and on-device outputs.
python -m qai_hub_models.models.convnext_tiny_w8a16_quantized.export
Profiling Results
------------------------------------------------------------
ConvNext-Tiny-w8a16-Quantized
Device                          : Samsung Galaxy S23 (13)
Runtime                         : QNN                    
Estimated inference time (ms)   : 3.6                    
Estimated peak memory usage (MB): [0, 19]                
Total # Ops                     : 215                    
Compute Unit(s)                 : NPU (215 ops)          
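
Under the hood, the export script drives the Qualcomm® AI Hub client. The sketch below shows the bare compile-and-profile flow using qai_hub directly; the device name and input name are assumptions, and this plain torch.jit.trace path skips the quantization handling the packaged export script performs, so it is illustrative only:

import qai_hub as hub
import torch

from qai_hub_models.models.convnext_tiny_w8a16_quantized import Model

torch_model = Model.from_pretrained()
torch_model.eval()

# Trace the model so AI Hub can compile it for a target device.
traced = torch.jit.trace(torch_model, torch.rand(1, 3, 224, 224))

device = hub.Device("Samsung Galaxy S23")

# Compile for the device, then profile the compiled asset on it.
compile_job = hub.submit_compile_job(
    model=traced,
    device=device,
    input_specs=dict(image=(1, 3, 224, 224)),  # input name is an assumption
)
profile_job = hub.submit_profile_job(
    model=compile_job.get_target_model(),
    device=device,
)
print(profile_job.job_id)  # progress is visible on the AI Hub dashboard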

Run demo on a cloud-hosted device

You can also run the demo on-device.

python -m qai_hub_models.models.convnext_tiny_w8a16_quantized.demo --on-device

NOTE: If you want to run this in a Jupyter Notebook or Google Colab-like environment, please add the following to your cell (instead of the above command).

%run -m qai_hub_models.models.convnext_tiny_w8a16_quantized.demo -- --on-device

Deploying compiled model to Android

The models can be deployed using multiple runtimes:

  • TensorFlow Lite (.tflite export): This tutorial provides a guide to deploy the .tflite model in an Android application (a quick desktop sanity check of the exported .tflite asset is sketched after this list).

  • QNN (.so export): This sample app provides instructions on how to use the .so shared library in an Android application.
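
Before integrating the model into an app, it can help to verify the exported .tflite asset on a desktop. A minimal sketch using the TensorFlow Lite Python interpreter; the file name is a placeholder for whatever path the export script wrote:

import numpy as np
import tensorflow as tf

# Placeholder path; use the .tflite file produced by the export script.
interpreter = tf.lite.Interpreter(
    model_path="convnext_tiny_w8a16_quantized.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a random input matching the model's declared shape and dtype.
dummy = np.random.rand(*input_details[0]["shape"]).astype(
    input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()

logits = interpreter.get_tensor(output_details[0]["index"])
print(logits.shape)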

View on Qualcomm® AI Hub

Get more details on ConvNext-Tiny-w8a16-Quantized's performance across various devices here. Explore all available models on Qualcomm® AI Hub.

License

  • The license for the original implementation of ConvNext-Tiny-w8a16-Quantized can be found here.
  • The license for the compiled assets for on-device deployment can be found here.
