Deploying YOLOv5 on Rockchip RK3588 Using RKNN-Toolkit2


Environment Configuration and Docker Setup

To set up the deep learning environment for the RK3588 platform, start by preparing the host system. Docker provides a consistent, isolated environment for running RKNN-Toolkit2. If Docker is not already installed, execute the following commands to install the required dependencies and the Docker engine.

sudo apt-get update
sudo apt-get install apt-transport-https ca-certificates curl gnupg-agent software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io
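To confirm the installation succeeded, the standard hello-world image provides a quick smoke test:

sudo docker run hello-world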

Once Docker is installed, load the RKNN-Toolkit2 image (version 2.1.0 in this guide) and start a container. This container will serve as the workspace for model conversion and compilation.

docker load -i rknn-toolkit2-2.1.0-cp38-docker.tar
docker run -it -v $HOME:/home/host_dev rknn-toolkit2:2.1.0-cp38
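Once inside the container, you can optionally confirm that the toolkit is importable before proceeding:

python3 -c "from rknn.api import RKNN; print('RKNN-Toolkit2 OK')"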

Inside the Docker container, install essential build tools and download the cross-compilation toolchain. The toolchain is required to build the C++ demo application for the aarch64 architecture of the RK3588.

apt-get install cmake wget
export GCC_COMPILER=~/opt/gcc-linaro-7.5.0-2019.12-x86_64_aarch64-linux-gnu/bin/aarch64-linux-gnu
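The export above assumes the Linaro GCC 7.5.0 toolchain has already been extracted under ~/opt. If it is not, it can be fetched from Linaro's release archive; the URL below is one plausible source and should be verified against the exact version you need:

mkdir -p ~/opt && cd ~/opt
wget https://releases.linaro.org/components/toolchain/binaries/7.5-2019.12/aarch64-linux-gnu/gcc-linaro-7.5.0-2019.12-x86_64_aarch64-linux-gnu.tar.xz
tar -xJf gcc-linaro-7.5.0-2019.12-x86_64_aarch64-linux-gnu.tar.xz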

Model Conversion with RKNN-Toolkit2

The core of the deployment process is converting the standard YOLOv5 ONNX model into the RKNN format optimized for the Rockchip NPU. RKNN-Toolkit2 handles model import, quantization, and graph construction. The Python script below, convert_to_rknn.py, automates this process.

import sys
from rknn.api import RKNN

CALIBRATION_DATASET = '../../../datasets/COCO/coco_subset_20.txt'
DEFAULT_RKNN_FILENAME = 'yolov5.rknn'

def onnx_to_rknn(onnx_model_path, target_platform, quantize_dtype='i8'):
    """Converts an ONNX model to RKNN format for a specific Rockchip platform."""
    rknn_engine = RKNN(verbose=True)

    # Configure preprocessing (inputs normalized to [0, 1] via (x - mean) / std) and the target hardware
    print('--> Configuring RKNN model...')
    rknn_engine.config(
        mean_values=[[0, 0, 0]], 
        std_values=[[255, 255, 255]], 
        target_platform=target_platform
    )

    # Load the ONNX model
    print('--> Loading ONNX model from: {}'.format(onnx_model_path))
    ret = rknn_engine.load_onnx(model=onnx_model_path)
    if ret != 0:
        print('Error: Failed to load ONNX model.')
        sys.exit(ret)

    # Build the RKNN model with quantization
    print('--> Building RKNN model...')
    do_quantization = quantize_dtype in ['i8', 'u8']
    ret = rknn_engine.build(do_quantization=do_quantization, dataset=CALIBRATION_DATASET)
    if ret != 0:
        print('Error: Failed to build RKNN model.')
        sys.exit(ret)

    # Export the converted model, checking the return code as above
    print('--> Exporting RKNN model to: {}'.format(DEFAULT_RKNN_FILENAME))
    ret = rknn_engine.export_rknn(DEFAULT_RKNN_FILENAME)
    if ret != 0:
        print('Error: Failed to export RKNN model.')
        sys.exit(ret)
    
    # Release resources
    rknn_engine.release()

if __name__ == '__main__':
    if len(sys.argv) < 3:
        print(f"Usage: python {sys.argv[0]} [onnx_path] [platform] [dtype]")
        sys.exit(1)
    
    model_path = sys.argv[1]
    platform = sys.argv[2]
    dtype = sys.argv[3] if len(sys.argv) > 3 else 'i8'
    
    onnx_to_rknn(model_path, platform, dtype)

Execute this script within the Docker environment to generate the .rknn file. The toolkit maps the ONNX operators to NPU-accelerated instructions and, when the i8 or u8 data type is selected (i8 is the default here), performs 8-bit quantization to improve inference speed on the embedded device.

python3 ./convert_to_rknn.py ../model/yolov5s_relu.onnx rk3588
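The CALIBRATION_DATASET file referenced by the script is simply a plain-text list of calibration images, one path per line. A minimal sketch (these filenames are illustrative placeholders, not files shipped with the toolkit):

./subset/img_0001.jpg
./subset/img_0002.jpg
./subset/img_0003.jpg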

Compiling the Deployment Executable

After successful conversion, the next step is to compile the C++ inference application that will run on the ELF2 board. The rknn_model_zoo repository provides the necessary source code and build scripts. Use the build script to cross-compile the demo for the aarch64 architecture.

cd /mnt/rknn_model_zoo-2.1.0
./build-linux.sh -t rk3588 -a aarch64 -d yolov5

Upon completion, the build process generates an executable and associated libraries. These artifacts are located in the install/rk3588_linux_aarch64/rknn_yolov5_demo directory. Package this directory into a compressed archive for transfer to the development board.

cd install/rk3588_linux_aarch64/
tar -zcvf rknn_yolov5_demo.tar.gz rknn_yolov5_demo/

Running Inference on the Target Device

Transfer the generated archive to the ELF2 development board using SCP or a similar file transfer method. Once transferred, extract the files and execute the binary to verify the deployment.
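For example, from the build host (the address 192.168.1.100 and the root user below are placeholders for your board's actual credentials):

scp rknn_yolov5_demo.tar.gz root@192.168.1.100:~/

Then, on the board: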

tar -zxvf rknn_yolov5_demo.tar.gz
cd rknn_yolov5_demo
chmod +x rknn_yolov5_demo
./rknn_yolov5_demo model/yolov5.rknn model/bus.jpg

The application will load the RKNN model, process the input image using the NPU, and output the detection results. The output confirms the model input/output tensor details and lists the detected objects with their coordinates and confidence scores, saving the visual result to out.png.
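If the demo fails to initialize the NPU, it can help to confirm that the board's rknpu driver matches the runtime. On many RK3588 images the driver version is exposed through debugfs, though the exact path may vary by kernel:

cat /sys/kernel/debug/rknpu/version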

Tags: RK3588, RKNN

