Deep Learning for ARM using Simulink/Embedded Coder

Peter Balazovic on 16 Dec 2021
Edited: Peter Balazovic on 17 Dec 2021
I noticed that the MATLAB example shows code generation with Simulink/Embedded Coder that takes advantage of the ARM Compute Library for deep learning.
My questions:
  • Which versions of the ARM Compute Library are supported? Only 19.05 and 20.02.1?
  • Does it depend on the library version supported by the embedded target, i.e. the one already pre-built by the vendor?
  • Can it run the models through ARM-NN, which uses the Compute Library to dispatch to the on-chip execution units?
  • Does codegen support additional (proprietary) libraries?
  • Can codegen use a Python or C++ DNN interpreter that is already available on-chip?
Thank you.

Answers (1)

Nathan Malimban on 16 Dec 2021
Hi Peter,
1. For 21b, the supported ARM Compute Library versions are 19.02, 19.05, 20.02.1, and 20.11.
2. Just make sure that the version on the hardware is one of the ones compatible with your MATLAB release. For setting the library up on the hardware, see https://www.mathworks.com/matlabcentral/answers/455590-matlab-coder-how-do-i-build-the-arm-compute-library-for-deep-learning-c-code-generation-and-deplo
3. Today, we directly call into ARM-Compute library without using ARM-NN indirection as it does not provide any additional benefits for ARM Cortex A series processors. We’d be interested in learning how ARM-NN improves your deployment workflow, though.
4. For boards with ARM Cortex-M, codegen supports CMSIS-NN starting in 22a. For Intel CPUs, codegen supports MKL-DNN. For NVIDIA GPUs, codegen supports the cuDNN and TensorRT libraries.
5. We are supporting deployment of TFLite models in 22a.
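For reference, selecting the ARM Compute Library and a specific version in a code generation configuration looks roughly like this. A minimal sketch: the entry-point function name, input size, and version string are placeholders, and the chosen version must be one supported by your MATLAB release.

```matlab
% Sketch: generate C++ library code against the ARM Compute Library.
cfg = coder.config('lib');
cfg.TargetLang = 'C++';
cfg.DeepLearningConfig = coder.DeepLearningConfig('arm-compute');
cfg.DeepLearningConfig.ArmComputeVersion = '20.02.1'; % must match the library built on the target
cfg.DeepLearningConfig.ArmArchitecture = 'armv8';     % 'armv7' or 'armv8'
% Hypothetical entry point and input size for illustration:
% codegen -config cfg myPredictFcn -args {ones(224,224,3,'single')}
```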
Comments (6)
Nathan Malimban on 17 Dec 2021
5. Yes. In 22a, we will allow you to load a TFLite network in MATLAB and generate code. The generated code leverages the TFLite interpreter.
Peter Balazovic on 17 Dec 2021
Edited: Peter Balazovic on 17 Dec 2021
4.
Certainly, it could help with the workflow. In this sense, I would expect a coder support package for i.MX RT (Cortex-M) parts. Certain i.MX RT devices have a Cadence DSP. TFLite Micro leverages DSP-optimized implementations of various NN layers and low-level NN kernels; this DSP library focuses on the speech and audio neural network domain.

