Deep Learning Network Quantization for Deployment to Embedded Targets
Quantization enables the deployment of deep learning networks, such as semantic segmentation algorithms, to resource-limited embedded targets. Deployment to Arm, FPGA, and GPU targets will be shown, and the challenges of maintaining network accuracy while reducing both the size of the network and its memory footprint will be explored.
- Deploying deep learning networks on resource-constrained targets
- Semantic segmentation example: compressing a trained network while preserving accuracy
- Generating code to deploy deep networks to Arm devices
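The core idea behind the compression discussed above can be illustrated in outline. Below is a minimal NumPy sketch of symmetric per-tensor int8 quantization, the common scheme for shrinking a trained network's weights to a quarter of their float32 size; the function names are illustrative, not the toolbox API:

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization: map float weights
    onto [-127, 127] using a single scale factor."""
    scale = np.max(np.abs(weights)) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights to measure accuracy loss."""
    return q.astype(np.float32) * scale

# Example: quantize one layer's weights and bound the rounding error.
w = np.random.randn(64, 64).astype(np.float32)
q, scale = quantize_int8(w)
err = np.max(np.abs(w - dequantize(q, scale)))
# int8 storage is 4x smaller than float32; err is at most scale / 2
```

In practice, tools such as the Deep Learning Toolbox Model Quantization Library also calibrate the dynamic ranges of activations on representative data, which is what allows accuracy to be preserved after quantization.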
About the Presenters
Greg is the product marketing manager for Fixed-Point Designer and the Deep Learning Toolbox Model Quantization Library. He has experience in embedded systems development and product development in the semiconductor industry. He received an MBA from Worcester Polytechnic Institute, an M.S. in Electrical Engineering from the University of Massachusetts Lowell, and a B.S. in Electrical Engineering from WPI.
Brenda Zhuang is a software engineering manager who leads a team developing software tools for the automatic deployment of embedded applications on microprocessors and FPGAs. Brenda has contributed to the development and evolution of many new features in the MATLAB and Simulink product families. She received her Ph.D. in Systems Engineering from Boston University and an M.S. in Electrical and Electronics Engineering from the Hong Kong University of Science and Technology.
Recorded: 27 Apr 2021