Fading Coder

One Final Commit for the Last Sprint


Converting Open-Source Framework Models to Ascend Models Using Orange Pi AI Pro

Tech · May 14

Introduction to Model Conversion for Ascend AI Processors

When developing AI inference applications on Orange Pi AI Pro, it is necessary to convert original network models (such as those from PyTorch, TensorFlow, or Caffe) into the .om format compatible with Ascend hardware. This conversion enables the use of Ascend Computing Language (ACL) interfaces like aclmdlExecute for efficient model inference.

The Ascend Tensor Compiler (ATC) is the primary tool for this conversion. It supports direct conversion from Caffe, ONNX, TensorFlow, and MindSpore models. For PyTorch models, an intermediate export to ONNX via torch.onnx.export is required before using ATC.

Overview of ATC Tool

ATC transforms models from open-source frameworks into optimized .om files tailored for Ascend AI processors. The conversion process includes operator scheduling, data reorganization, and memory optimization to enhance execution performance.

Key steps in conversion:

  • Parsing the source model into an intermediate representation (IR Graph).
  • Performing graph preparation, partitioning, optimization, and compilation.
  • Generating a final .om file for inference via AscendCL APIs.

Basic Usage Example

Below is an example of converting a Caffe ResNet-50 model using ATC:

atc --framework=0 --soc_version=${soc_version} \
--model=$HOME/model/resnet50.prototxt \
--weight=$HOME/model/resnet50.caffemodel \
--output=$HOME/output/resnet50_ascend

Parameters explained:

  • --framework: Source framework (0 for Caffe).
  • --soc_version: Ascend processor version (run npu-smi info to find it).
  • --model: Path to model definition file.
  • --weight: Path to model weights (Caffe only).
  • --output: Output path for .om file.

Success is indicated by an "ATC run success" message.
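When scripting conversions, it can help to assemble the ATC command programmatically and check for that success banner. The sketch below assumes atc is on the PATH and uses a hypothetical soc_version value ("Ascend310B4"); adjust both to your environment.

```python
# Hedged sketch: driving ATC from Python and checking its success banner.
# Paths, the soc_version value, and atc being on PATH are assumptions.
import subprocess

def build_atc_cmd(prototxt: str, caffemodel: str, output: str,
                  soc_version: str, framework: int = 0) -> list:
    """Assemble the ATC argv list for a Caffe model (framework code 0)."""
    return [
        "atc",
        "--framework=%d" % framework,
        "--soc_version=%s" % soc_version,
        "--model=%s" % prototxt,
        "--weight=%s" % caffemodel,
        "--output=%s" % output,
    ]

def run_atc(cmd: list) -> bool:
    """Run ATC and report whether the 'ATC run success' banner appeared."""
    proc = subprocess.run(cmd, capture_output=True, text=True)
    return "ATC run success" in proc.stdout

if __name__ == "__main__":
    cmd = build_atc_cmd("model/resnet50.prototxt", "model/resnet50.caffemodel",
                        "output/resnet50_ascend", "Ascend310B4")
    print(" ".join(cmd))
```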

Advanced ATC Features

1. Model to JSON Conversion

Convert models to JSON for inspection:

atc --mode=1 --om=$HOME/model/resnet50.om --json=$HOME/output/model_info.json
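The exact layout of the generated JSON depends on the model and the ATC version, so a first look is easiest with a schema-agnostic summary. The file path in this sketch is an assumption.

```python
# Hedged sketch: a schema-agnostic peek at the JSON produced by atc --mode=1.
# The path is an assumption; the JSON layout varies by model and ATC version,
# so this only reports the top-level structure.
import json

def summarize_model_json(path: str) -> list:
    """Return the sorted top-level keys of an ATC-generated model JSON."""
    with open(path, encoding="utf-8") as f:
        info = json.load(f)
    return sorted(info) if isinstance(info, dict) else [type(info).__name__]

if __name__ == "__main__":
    print(summarize_model_json("output/model_info.json"))
```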

2. Custom Input/Output Data Types

Specify data types and output nodes:

atc --framework=0 --soc_version=${soc_version} \
--model=$HOME/model/resnet50.prototxt \
--weight=$HOME/model/resnet50.caffemodel \
--output=$HOME/output/resnet50_custom \
--input_fp16_nodes="data" \
--out_nodes="pool1:0" \
--output_type="pool1:0:FP16"

3. Dynamic Batch Size and Resolution

Support for variable batch sizes and image dimensions:

atc --framework=0 --soc_version=${soc_version} \
--model=$HOME/model/resnet50.prototxt \
--weight=$HOME/model/resnet50.caffemodel \
--output=$HOME/output/resnet50_dynamic \
--input_shape="data:-1,3,224,224" \
--dynamic_batch_size="1,2,4,8"

Use -1 in input_shape to denote dynamic dimensions, with supported values specified in dynamic_batch_size or dynamic_image_size.
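At inference time, a dynamic-batch .om only accepts the batch sizes declared via dynamic_batch_size, so application code must map each request onto that set (padding the unused slots). A minimal helper, assuming the "1,2,4,8" profile from the command above:

```python
# Hedged sketch: mapping a request onto the batch sizes declared at conversion
# time with --dynamic_batch_size="1,2,4,8". Only those sizes are accepted.
SUPPORTED_BATCHES = (1, 2, 4, 8)

def pick_batch(n_images: int, supported=SUPPORTED_BATCHES) -> int:
    """Smallest declared batch size that fits n_images; raises if none does."""
    for b in sorted(supported):
        if b >= n_images:
            return b
    raise ValueError(
        "%d images exceeds the largest declared batch %d" % (n_images, max(supported))
    )
```

For example, a request with 3 images would run at batch size 4, with the extra slot padded by the caller.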

