The Deep Learning Toolbox Model Compression Library lets you compress your deep learning models using pruning, projection, and quantization to reduce their memory footprint and computational requirements.
Pruning and projection are structural compression techniques that reduce the size of deep neural networks by removing learnables and filters that have the smallest impact on inference accuracy.
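As a rough sketch of the Taylor pruning workflow described above (function names are from Deep Learning Toolbox; the network `net`, the minibatch queue `mbq`, the iteration count, and the `modelLossPruning` helper are placeholders you supply):

```matlab
% Sketch of an iterative filter-pruning loop, assuming a trained dlnetwork
% `net`, a minibatchqueue `mbq`, and a user-supplied modelLossPruning
% function returning the loss, pruning activations, and their gradients.
prunableNet = taylorPrunableNetwork(net);

for iteration = 1:30
    [X,T] = next(mbq);
    [loss,pruningActivations,pruningGradients] = ...
        dlfeval(@modelLossPruning,prunableNet,X,T);
    % Accumulate Taylor importance scores, then remove the lowest-scoring filters
    prunableNet = updateScore(prunableNet,pruningActivations,pruningGradients);
    prunableNet = updatePrunables(prunableNet,MaxToPrune=8);
end

% Convert back to a dlnetwork and fine-tune to recover accuracy.
prunedNet = dlnetwork(prunableNet);
```

Projection-based compression is available as a more direct alternative, e.g. `compressNetworkUsingProjection(net,mbq)`.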
The library supports quantization to 8-bit integers (INT8) for supported layers on CPUs, FPGAs, and NVIDIA GPUs. It collects layer-level data on the weights, activations, and intermediate computations; using this data, it quantizes your model and provides metrics to validate the accuracy of the quantized network against the single-precision baseline. This iterative workflow lets you refine your quantization strategy.
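The calibrate/quantize/validate workflow above can be sketched as follows (a minimal outline using Deep Learning Toolbox function names; the network `net` and the calibration/validation datastores are placeholders you supply):

```matlab
% Sketch of the INT8 quantization workflow, assuming a trained network
% `net` plus calibration and validation datastores you provide.
quantObj = dlquantizer(net,ExecutionEnvironment="GPU");

% Collect layer-level dynamic ranges of weights and activations
calResults = calibrate(quantObj,calibrationData);

% Compare the quantized network's accuracy against the FP32 baseline
valResults = validate(quantObj,validationData);

% Obtain the quantized network for deployment
qNet = quantize(quantObj);
```

If the validation metrics show too large an accuracy drop, you can adjust the calibration data or quantization options and repeat the cycle.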
As of R2024b, you can export quantized networks to Simulink deep learning layer blocks for simulation and deployment to embedded systems.
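A minimal sketch of the Simulink export step (assumes a trained or quantized `dlnetwork` named `net`; consult the documentation for the available name-value options):

```matlab
% Generate a Simulink model built from deep learning layer blocks (R2024b+).
exportNetworkToSimulink(net);
```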
Please refer to the documentation here: https://www.mathworks.com/help/deeplearning/quantization.html
Quantization Workflow Prerequisites can be found here:
If you have download or installation problems, please contact Technical Support - www.mathworks.com/contact_ts
Additional Resources
- Learn more about MATLAB and Simulink for tinyML
- Quantization Aware Training (QAT) with MobileNet-v2 (Example, GitHub Repo)
- Overview Video - https://www.youtube.com/watch?v=jufOpBeSvHM
MATLAB Release Compatibility
- Compatible with R2020a through R2026a
Platform Compatibility
- Windows
- macOS (Apple Silicon)
- macOS (Intel)
- Linux
