RAI 1.5 ESRGAN Inference (C++)

A C++ command-line tool for running ESRGAN (Enhanced Super-Resolution Generative Adversarial Networks) inference using ONNX Runtime, with support for both CPU and AMD NPU (Neural Processing Unit) execution via the VitisAI Execution Provider.

Overview

This tool performs image super-resolution using pre-trained ESRGAN models, taking low-resolution images as input and producing high-resolution upscaled outputs. It supports execution on both CPU and AMD Ryzen AI NPUs for accelerated inference.
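Before inference, the low-resolution image has to be converted into the model's input tensor. The following is a minimal, hypothetical sketch of OpenCV-based preprocessing; the NHWC layout and 8-bit input are assumptions inferred from the model filename (..._nhwc.onnx with uint8 quantization), not taken from the tool's actual source.

#include <opencv2/opencv.hpp>
#include <cstdint>
#include <stdexcept>
#include <string>
#include <vector>

// Hypothetical preprocessing sketch: load a low-resolution image and pack it
// into a contiguous NHWC (N = 1) uint8 buffer. RGB channel order is an
// assumption; check the model's expected input before reusing this.
std::vector<uint8_t> load_nhwc_u8(const std::string& path, int& height, int& width) {
    cv::Mat bgr = cv::imread(path, cv::IMREAD_COLOR);  // 8-bit BGR, HWC layout
    if (bgr.empty()) throw std::runtime_error("failed to read " + path);
    cv::Mat rgb;
    cv::cvtColor(bgr, rgb, cv::COLOR_BGR2RGB);
    height = rgb.rows;
    width = rgb.cols;
    // An 8-bit cv::Mat is already stored HWC, which equals NHWC for N = 1.
    return std::vector<uint8_t>(rgb.data, rgb.data + rgb.total() * rgb.channels());
}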

This is the optimized version for Ryzen AI 1.5. It includes timers so the user can measure inference latency.

By default, inference is first run once on the CPU before running on the NPU.
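As an illustration of what the timers measure, here is a minimal sketch (not the tool's actual source) of timing a single Run call with std::chrono via the ONNX Runtime C++ API. The same helper can wrap both the initial CPU run and the subsequent NPU runs.

#include <chrono>
#include <onnxruntime_cxx_api.h>

// Sketch: time one inference call in milliseconds. Works the same for a
// default (CPU) session and for a session created with the VitisAI EP.
double timed_run_ms(Ort::Session& session,
                    const char* const* input_names, const Ort::Value* inputs, size_t n_in,
                    const char* const* output_names, size_t n_out) {
    auto t0 = std::chrono::high_resolution_clock::now();
    auto outputs = session.Run(Ort::RunOptions{nullptr},
                               input_names, inputs, n_in,
                               output_names, n_out);
    auto t1 = std::chrono::high_resolution_clock::now();
    return std::chrono::duration<double, std::milli>(t1 - t0).count();
}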

Required Files

  • ONNX Model: ESRGAN model file (esrgan_wint8_auint8_bint8_pot_npu_nhwc.onnx)
  • Config JSON: VitisAI configuration file (vitisai_config.json)
  • Runtime DLLs: Ryzen AI runtime libraries (automatically copied from Ryzen AI SDK during build)
  • XCLBIN Files: NPU binary files (automatically copied from Ryzen AI SDK during build)
  • OpenCV Dependencies: OpenCV DLLs (included in opencv/build/x64/vc16/bin/)

Building the Project

The build process automatically copies the required OpenCV and Ryzen AI DLLs to the executable directory as a post-build step.

It is assumed that the Ryzen AI 1.5 SDK is installed at C:\Program Files\RyzenAI\1.5.0.

Prerequisites for Building

It is recommended to use the Developer Command Prompt for Visual Studio 2022.

From the command prompt, set the RYZEN_AI_INSTALLATION_PATH environment variable:

set RYZEN_AI_INSTALLATION_PATH=C:\Program Files\RyzenAI\1.5.0

The executable is built by running the included compile.bat script:

compile.bat

After the build is complete, the executable (esrgan_inference.exe) can be found in the .\build\Release directory.

cd into this directory:

cd build\Release

Usage

Command-Line Syntax

esrgan_inference.exe [OPTIONS]

Required Arguments

  • -m, --model <file> : ONNX model filename (relative to executable directory)
  • -c, --config <file> : JSON configuration filename (relative to executable directory)

Optional Arguments

  • -i, --input_image <string> : Input image file (default: ..\..\input_image.png)
  • -o, --output_image <string> : Output image file (default: output_image.png)
  • -n, --iters <int> : Number of inference iterations (default: 1)
  • -k, --cache_key <string> : Cache key for VitisAI EP (default: empty)
  • -d, --cache_dir <string> : Cache directory for VitisAI EP (default: empty)
  • -x, --xclbin <string> : XCLBIN filename for NPU (default: auto-selected)
  • -h, --help : Show help message

Example Usage

esrgan_inference.exe -m esrgan_wint8_auint8_bint8_pot_npu_nhwc.onnx -c vitisai_config.json -d . -k esrgan_cache -i ..\..\input_image.png -o output_image.png

This example demonstrates:

  • Using model and config files from the build directory (automatically copied)
  • Relative paths for input images (from project root)
  • Storing the model cache in a folder named esrgan_cache in the current directory

Note: The esrgan_cache/ directory is created automatically during the first NPU inference run if enable_cache_file_io_in_mem is set to 0 in the vai_ep_options. If this option isn't set, the cache is stored in memory only and the model is recompiled every time the application runs, which can be slow.

The esrgan_cache directory contains compiled model artifacts that significantly speed up subsequent runs. Once the cache has been created, you can point to it using the -d and -k parameters.
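For reference, the cache-related settings above map onto VitisAI EP provider options passed at session creation. The sketch below is a hypothetical illustration using the generic ONNX Runtime AppendExecutionProvider C++ API; the option keys (config_file, cache_dir, cache_key, enable_cache_file_io_in_mem) are assumptions based on this README's flags and should be checked against the Ryzen AI 1.5 documentation.

#include <onnxruntime_cxx_api.h>
#include <string>
#include <unordered_map>

// Hypothetical sketch: create an NPU session whose VitisAI EP options mirror
// the -c/-d/-k flags and the enable_cache_file_io_in_mem setting above.
// Option key names are assumptions; verify them against the Ryzen AI docs.
Ort::Session make_npu_session(Ort::Env& env, const wchar_t* model_path) {
    Ort::SessionOptions session_options;
    std::unordered_map<std::string, std::string> vai_ep_options = {
        {"config_file", "vitisai_config.json"},   // -c / --config
        {"cache_dir", "."},                       // -d / --cache_dir
        {"cache_key", "esrgan_cache"},            // -k / --cache_key
        {"enable_cache_file_io_in_mem", "0"},     // persist the cache to disk
    };
    session_options.AppendExecutionProvider("VitisAI", vai_ep_options);
    return Ort::Session(env, model_path, session_options);
}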
