Videos and Webinars

Using Lookup Tables to Accelerate Deep Learning Inference

This video highlights the lookup table optimization capability, used here to generate an efficient lookup table for the sigmoid function, a key activation function in deep learning networks. We then compare the relative speedup on an Arduino Due® and an STMicroelectronics® Discovery board, using the generated code for hardware-in-the-loop simulation.

A lookup table (LUT) is a key construct in embedded designs and is often used to speed up the run-time execution of certain functions in your algorithm. For instance, computationally expensive trigonometric functions are often replaced with a more efficient LUT implementation.
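As an illustration of the principle, here is a minimal Python sketch (not MathWorks-generated code; the table size and interpolation scheme are illustrative assumptions) that replaces sin() with a 256-entry lookup table plus linear interpolation:

```python
import math

N = 256                     # number of table entries (illustrative choice)
TWO_PI = 2.0 * math.pi
table = [math.sin(TWO_PI * i / N) for i in range(N)]

def lut_sin(x: float) -> float:
    """Approximate sin(x) by linear interpolation into the table."""
    t = (x % TWO_PI) / TWO_PI * N   # map x into table coordinates [0, N)
    i = int(t)
    frac = t - i                    # fractional position between entries
    j = (i + 1) % N                 # wrap around at the end of the table
    return table[i] + frac * (table[j] - table[i])

# Worst-case absolute error of the approximation over a dense sweep:
max_err = max(abs(lut_sin(k * TWO_PI / 10000) - math.sin(k * TWO_PI / 10000))
              for k in range(10000))
```

Each call costs two table reads and one multiply-add instead of a full trigonometric evaluation, which is the trade-off that makes LUTs attractive on embedded targets.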

Let's try a simple experiment: apply the same principle to the sigmoid function and investigate how much we can accelerate deep learning inference performance, particularly on the edge.

The sigmoid function, σ(x) = 1/(1 + e^(-x)), is a key building block of neural networks and one of the most commonly used nonlinear activation functions in deep learning.

Here we have a simple Simulink subsystem that models the sigmoid function. I am going to use the Lookup Table Optimizer app to generate an optimal LUT, specifying the input and output data types. Since this is a bounded function, I can specify the bounds on the output, and finally a 1% tolerance on the output.

Once the optimization problem is solved, we can look at the comparison plot to verify that the error of the LUT approximation is within our specified tolerance.
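The overall workflow (breakpoints over a bounded input range, a table of function values, then an error check against the specified tolerance) can be sketched in plain Python; the breakpoint range and count below are assumptions for illustration, not the actual output of the Lookup Table Optimizer:

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

# Evenly spaced breakpoints on [-8, 8]; sigmoid saturates toward 0/1 outside
# this range, so we clamp there (the output bounds from the optimizer setup).
LO, HI, N = -8.0, 8.0, 33
step = (HI - LO) / (N - 1)
breakpoints = [LO + i * step for i in range(N)]
tab = [sigmoid(x) for x in breakpoints]

def lut_sigmoid(x: float) -> float:
    if x <= LO:
        return tab[0]           # clamp to lower output bound
    if x >= HI:
        return tab[-1]          # clamp to upper output bound
    t = (x - LO) / step
    i = int(t)
    frac = t - i
    return tab[i] + frac * (tab[i + 1] - tab[i])

# Verify the approximation error stays within the 1% (0.01 absolute)
# tolerance we specified, analogous to checking the comparison plot:
max_err = max(abs(lut_sigmoid(-10 + k * 0.002) - sigmoid(-10 + k * 0.002))
              for k in range(10001))
```

With 33 breakpoints at a spacing of 0.5, the linear-interpolation error stays comfortably inside the 1% tolerance; the optimizer's job is to find the smallest such table automatically.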

Now, as a next step, let's generate C code from the sigmoid function and the generated LUT and deploy it to a Cortex-M platform such as the Arduino Due board.

We use hardware-in-the-loop simulation to run the generated code with inputs from Simulink. There is some overhead to running the code in this mode, but it still gives us a good comparison of relative execution speed.

As you can see from the execution profile, the LUT is 2.5x faster on the Arduino. I repeated the same test on a Cortex-M7 based STMicroelectronics Discovery board. Here is a plot showing the relative speedup of the lookup table with different data types.

In fact, this can scale further if you share the lookup table approximation across all neurons, decreasing the execution time by orders of magnitude. You can repeat the same experiment with other activation functions, such as the hyperbolic tangent.
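One concrete way to share an approximation between activation functions uses the identity tanh(x) = 2·sigmoid(2x) - 1, so a single sigmoid table can also serve the hyperbolic tangent. A sketch of the idea (not part of the original workflow; the sigmoid here stands in for any sigmoid approximation, such as the LUT above):

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def tanh_from_sigmoid(x: float) -> float:
    # Exact identity: tanh(x) = 2*sigmoid(2*x) - 1, so any sigmoid
    # approximation (e.g. a shared LUT) doubles as a tanh approximation.
    return 2.0 * sigmoid(2.0 * x) - 1.0
```

Note that rescaling the input by 2 tightens the effective breakpoint spacing, so the tanh approximation inherits (roughly) the same error tolerance as the underlying sigmoid table.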

To learn more about optimizing LUTs in your design, please refer to the additional links below the video.

Related Products

  • Fixed-Point Designer
  • Embedded Coder

Learn More

What Is Quantization?
Calculate Complex dB Using a Direct Lookup Table
Reducing Memory Footprint of Lookup Tables in Your Design
Convert Digit Recognition Neural Network to Fixed-Point and Generate C Code


© 1994-2021 The MathWorks, Inc.