AN 993: Using Custom Models with Intel® FPGA AI Suite
Online Version
777190
Contents
1. Introduction
2. Prerequisites
3. Intel FPGA AI Suite Model Development Overview
4. Custom Model Examples
4.1. Example 1: Customized ResNet-18 Model
4.2. Example 2: Customized Multilayer Perceptron (MLP) Model
5. Common Errors When Using a Custom Model
6. AN 993 Using Custom Models with Intel FPGA AI Suite Revision History
1. Introduction
This application note outlines how to take a custom or unsupported model from a supported framework and use it with the OpenVINO™ toolkit and Intel® FPGA AI Suite. The document briefly covers supported frameworks, layers, and common issues encountered when using a custom or unsupported model.
This document also provides step-by-step examples for two different models. The first model is ResNet-18 with its last Fully Connected (FC) layer removed. The second example shows the addition of supported layers to an MLP model.
About the Intel FPGA AI Suite Documentation Library
Documentation for the Intel FPGA AI Suite is split across a few publications. Use the
following table to find the publication that contains the Intel FPGA AI Suite information
that you are looking for:
Table 1. Intel FPGA AI Suite Documentation Library

Release Notes
    Provides late-breaking information about the Intel FPGA AI Suite, including new features, important bug fixes, and known issues.

Getting Started Guide
    Get up and running with the Intel FPGA AI Suite by learning how to initialize your compiler environment and reviewing the various design examples and tutorials provided with the Intel FPGA AI Suite.

IP Reference Manual
    Provides an overview of the Intel FPGA AI Suite IP and the parameters you can set to customize it. This document also covers the Intel FPGA AI Suite IP generation utility.

Compiler Reference Manual
    Describes the use modes of the graph compiler (dla_compiler). It also provides details about the compiler command options and the format of compilation inputs and outputs.

PCIe-based Design Example User Guide
    Describes the design and implementation for accelerating AI inference using the Intel FPGA AI Suite, Intel Distribution of OpenVINO Toolkit, and an Intel PAC with Intel Arria® 10 GX FPGA or a Terasic* DE10-Agilex Development Board.

SoC-based Design Example User Guide
    Describes the design and implementation for accelerating AI inference using the Intel FPGA AI Suite, Intel Distribution of OpenVINO Toolkit, and an Intel Arria 10 SX SoC FPGA Development Kit.
2. Prerequisites
The instructions in this application note assume that you have installed and configured
the Intel FPGA AI Suite according to the instructions in the Intel FPGA AI Suite Getting
Started Guide.
This application note was developed with the following hardware and software:
• Desktop computer:
  — 1 free PCIe slot (for the FPGA board)
  — 48 GB RAM
  — 100 GB SSD (for faster storage)
  — A supported FPGA board:
    • Intel Programmable Acceleration Card (PAC) with Intel Arria 10 GX FPGA
    • Terasic DE10-Agilex Development Board
• Supported operating system:
  — Ubuntu* 18.04 LTS
  — Ubuntu 20.04 LTS
• Intel Distribution of OpenVINO Toolkit 2021.4.2 LTS
• Intel FPGA AI Suite 2023.1
3. Intel FPGA AI Suite Model Development Overview
Intel FPGA AI Suite was developed to simplify the development of artificial intelligence (AI) inference applications on Intel FPGA devices. Intel FPGA AI Suite facilitates the collaboration between software developers, ML engineers, and FPGA designers to create optimized FPGA AI platforms efficiently.
Utilities in Intel FPGA AI Suite speed up FPGA development for AI inference using familiar and popular industry frameworks such as TensorFlow* or PyTorch* and the OpenVINO toolkit, while also leveraging robust and proven FPGA development flows with Intel Quartus® Prime software.
The Intel FPGA AI Suite tool flow works with the OpenVINO toolkit, an open-source project that takes deep learning models from all the major deep learning frameworks (such as TensorFlow, PyTorch, or Keras*) and optimizes them for inference on a variety of hardware architectures, including various CPUs, CPU-GPU combinations, and FPGA devices.
Figure 1. Intel FPGA AI Suite Development Flow
[Figure: development flow from model (software) evaluation to hardware implementation, passing through the OpenVINO Model Optimizer and Inference Engine.]
The examples in this application note take you through using the OpenVINO Model Optimizer to convert the model to its intermediate representation (IR) and using the Intel FPGA AI Suite compiler (dla_compiler), as sketched in the example after the following list. The Intel FPGA AI Suite compiler can do the following tasks:
• Compile the IR from the OpenVINO Model Optimizer to an FPGA bitstream.
• Estimate the performance of a graph or a partition of a graph.
• Estimate the FPGA area required by an architecture.
• Generate an optimized architecture, or an architecture optimized for a frame rate target value.
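The following is a minimal command sketch of the estimation tasks, assuming a 2023.1-era installation. The option names (--network-file, --march, --fanalyze-performance, --fanalyze-area), the $COREDLA_ROOT variable, and the example architecture file are assumptions based on typical installations; confirm the exact options in the Intel FPGA AI Suite Compiler Reference Manual:

# Estimate performance and area for an IR against an example architecture
# (paths and flags are assumed; verify against your installed release).
dla_compiler \
    --network-file deploy.xml \
    --march $COREDLA_ROOT/example_architectures/A10_Performance.arch \
    --fanalyze-performance \
    --fanalyze-area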
Intel FPGA AI Suite can support custom models that use the following frameworks:
• TensorFlow 1
• TensorFlow 2
• PyTorch
• Keras
• ONNX*
• Caffe
• MXNet*
While Intel FPGA AI Suite supports these frameworks, it does not support every layer type. The following list shows some of the supported layers:
• Fully Connected
• 2D Convolution
• Depthwise Convolution
• Scale-Shift
• Deconvolution (Transpose Convolution)
• ReLU
• pReLU
• Leaky ReLU
• Clamp
• H-Sigmoid
• H-Swish
• Max Pool
• Average Pool
• Softmax
For a complete list of supported layers, refer to “Intel FPGA AI Suite Layer / Primitive Ranges” in the Intel FPGA AI Suite IP Reference Manual.
You can run layers that are not supported by Intel FPGA AI Suite by transferring data between the FPGA device and another supported device, such as a CPU or GPU. If your goal is to fully port an AI model to an FPGA device, you might need to consider the performance tradeoff from switching devices for processing.

Related Information
• Intel FPGA AI Suite Compiler Reference Manual
• Intel FPGA AI Suite IP Reference Manual
4. Custom Model Examples
This section contains the following examples of using a custom model with Intel FPGA AI Suite:
• Example 1: Customized ResNet-18 Model. This example removes a layer from a ResNet-18 model.
• Example 2: Customized Multilayer Perceptron (MLP) Model. This example adds supported layers to an MLP model.
4.1. Example 1: Customized ResNet-18 Model
This example removes the last Fully Connected (FC) layer to test whether a performance difference exists. The ResNet-18 model is supported by Intel FPGA AI Suite, but removing the last layer has not been tested. This example does not show how to test the modified model.
The removal is shown as an example only. The performance of this customized model has not been tested or optimized.
Model information:
• Model: ResNet-18
• Framework: Caffe
Figure 2. ResNet-18 Fully Connected (FC) Layer

layer {
  bottom: "pool5"
  top: "fc1000"
  name: "fc1000"
  type: "InnerProduct"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 1
  }
  inner_product_param {
    num_output: 1000
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
layer {
  bottom: "fc1000"
  name: "prob"
  type: "Softmax"
  top: "prob"
}
This code example shows the Fully Connected layer that is removed from the model. The removal is done by deleting these lines from the PROTOTXT (.prototxt) file. Note that the Softmax layer takes fc1000 as its bottom blob, so removing the FC layer also requires deleting or rewiring that reference.
The modified PROTOTXT file is then used to generate the OpenVINO intermediate representation (IR) form of the model. To generate the IR for the modified model, run the following command:
mo_caffe.py --input_model <path to model>.caffemodel --input_proto <path to .prototxt>

This command runs the OpenVINO Model Optimizer and creates three files that form the IR of this customized model: deploy.xml, deploy.mapping, and deploy.bin.
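Before compiling the IR for the FPGA, you can verify that it loads and that the network output is now the pooling blob rather than the deleted FC layer. This is a minimal sketch, assuming the OpenVINO 2021.4 Python API and the deploy.* file names from the previous step:

from openvino.inference_engine import IECore  # OpenVINO 2021.4 Python API

ie = IECore()
# Read the IR produced by the Model Optimizer.
net = ie.read_network(model="deploy.xml", weights="deploy.bin")
# With the FC layer removed, the final output should no longer be fc1000.
print("Inputs: ", list(net.input_info.keys()))
print("Outputs:", list(net.outputs.keys()))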
4.2. Example 2: Customized Multilayer Perceptron (MLP) Model
This example adds layers to a simple Multilayer Perceptron (MLP) model as follows:
• A ReLU layer was added after each linear transformation.
• A Softmax layer was added at the end.
These additions are shown as an example only. The performance of this customized model has not been tested or optimized.
Model information:
• Model: Multilayer Perceptron (MLP)
• Framework: PyTorch/ONNX
Figure 3. Original MLP Model Layers

import argparse

import torch
import numpy as np
from torch import nn, onnx


class MLP(nn.Module):
    def __init__(self):
        super(MLP, self).__init__()
        self.model = nn.Sequential(
            nn.Linear(10, 128),
            nn.Linear(128, 80),
            nn.Linear(80, 10),
        )

    def forward(self, x):
        return self.model(x)
Figure 4. Modified MLP Model Layers

import argparse
import os

import torch
import numpy as np
from torch import nn, onnx


class MLP(nn.Module):
    def __init__(self):
        super(MLP, self).__init__()
        self.model = nn.Sequential(
            nn.Linear(10, 128),
            nn.ReLU(),
            nn.Linear(128, 80),
            nn.ReLU(),
            nn.Linear(80, 10),
            nn.ReLU(),
        )

    def forward(self, x):
        return self.model(x)
This model is created with the PyTorch framework but must be converted to ONNX to use the model with the OpenVINO Model Optimizer. The following Python code example illustrates how you can convert the PyTorch model to ONNX:

onnx.export(model, x, args.onnx_file, export_params=True)
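The call above depends on model, x, and args defined elsewhere in the script. The following self-contained sketch shows one way to run the export; the file name mlp.onnx, the dummy input shape, and the opset version are illustrative assumptions rather than values from the original script:

import torch
from torch import nn, onnx

model = MLP()            # the modified MLP module from Figure 4
model.eval()             # switch to inference mode before tracing
x = torch.randn(1, 10)   # dummy input matching the first Linear layer (10 features)

# Trace the model with the dummy input and write the ONNX graph to disk.
onnx.export(model, x, "mlp.onnx", export_params=True, opset_version=11,
            input_names=["input"], output_names=["output"])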
For more information about converting PyTorch to ONNX, review the ONNX exporter documentation at the following URL:
https://pytorch.org/docs/stable/onnx.html#example-alexnet-from-pytorch-to-onnx
After the conversion is complete and the ONNX model is saved, convert the model to OpenVINO IR with the following command:
mo --input_model <path to model>.onnx
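After the Model Optimizer completes, a quick sanity check is to load the IR on the CPU plugin and run a dummy inference. This is a minimal sketch assuming the OpenVINO 2021.4 Python API and IR files named mlp.xml and mlp.bin (following the ONNX file name from the export sketch above):

import numpy as np
from openvino.inference_engine import IECore  # OpenVINO 2021.4 Python API

ie = IECore()
net = ie.read_network(model="mlp.xml", weights="mlp.bin")
exec_net = ie.load_network(network=net, device_name="CPU")

input_name = next(iter(net.input_info))
# One random 10-feature vector, matching the first Linear layer.
result = exec_net.infer({input_name: np.random.rand(1, 10).astype(np.float32)})
for name, value in result.items():
    print(name, value.shape)   # expect a (1, 10) output from the final layer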
5. Common Errors When Using a Custom Model
When you use a custom model with Intel FPGA AI Suite, you might get one of these errors when generating the OpenVINO IR for the model.

Shape Not Fully Defined Error
A “shape not fully defined” error looks like the following message:
[ ERROR ] Shape [ -1 299 299 3] is not fully defined for output 0 of "serving_default_input_1".
Use --input_shape with positive integers to override model input shapes.
For instructions on how to fix this error, review the documentation at the following URL:
https://docs.openvino.ai/latest/openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model.html
The following Model Optimizer command is an example of how to fix this error:
mo --input_model <model path>.onnx --input_shape [1,299,299,3] --input <input layer>
Unsupported Layer Type Error
An “unsupported layer type” error looks like the following message:
[ ERROR ] Failed to compile layer "<number of layer>": unsupported layer type "<layer name>"
To correct this error, implement one of the following fixes:
• Use a model with supported layers.
• Modify the model to remove or replace the unsupported layer.
• Run the layer in parallel on a supported device.

Running the layer in parallel on a supported device requires you to enable heterogeneous execution in OpenVINO. For details, refer to the following URL:
https://docs.openvino.ai/latest/openvino_docs_OV_UG_Hetero_execution.html
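For illustration, the following is a minimal sketch of heterogeneous execution with the OpenVINO 2021.4 Python API. The device name HETERO:FPGA,CPU is an assumption based on the PCIe-based design example flow; the device list for your runtime plugin may differ:

from openvino.inference_engine import IECore  # OpenVINO 2021.4 Python API

ie = IECore()
net = ie.read_network(model="deploy.xml", weights="deploy.bin")

# HETERO places each layer on the first device in the list that supports it,
# falling back to the CPU for layers the FPGA plugin cannot run.
exec_net = ie.load_network(network=net, device_name="HETERO:FPGA,CPU")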
6. AN 993 Using Custom Models with Intel FPGA AI Suite Revision History

Document Version    Changes
2023.05.01          Initial release.
Intel Corporation. All rights reserved. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Intel warrants performance of its FPGA and semiconductor products to current specifications in accordance with Intel's standard warranty, but reserves the right to make changes to any products and services at any time without notice. Intel assumes no responsibility or liability arising out of the application or use of any information, product, or service described herein except as expressly agreed to in writing by Intel. Intel customers are advised to obtain the latest version of device specifications before relying on any published information and before placing orders for products or services.
*Other names and brands may be claimed as the property of others.
ISO 9001:2015 Registered
