Deep Learning Toolbox Model Quantization Library enables quantization and compression of your deep learning models, reducing their memory footprint and computational requirements.
INT8 quantization of supported layers is available for CPU, FPGA, and NVIDIA GPU targets. The library enables you to collect layer-level data on the weights, activations, and intermediate computations. Using this data, it quantizes your model and provides metrics to validate the accuracy of the quantized network against the single-precision baseline. The iterative workflow allows you to refine the quantization strategy.
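As an illustration, here is a minimal sketch of that calibrate-then-validate workflow built around the library's dlquantizer interface; the squeezenet network, the image folder names, and the hComputeAccuracy metric helper are placeholders, not part of the library:

% Trained network to quantize (placeholder network).
net = squeezenet;

% Create a quantizer for a GPU target; "FPGA" and "CPU" targets are also supported.
quantObj = dlquantizer(net, ExecutionEnvironment="GPU");

% Collect layer-level ranges of weights, activations, and intermediate
% computations by exercising the network on representative calibration data.
calData = imageDatastore("calibration_images", IncludeSubfolders=true, LabelSource="foldernames");
calResults = calibrate(quantObj, calData);

% Validate the quantized network against the single-precision baseline
% using a user-supplied metric function (hComputeAccuracy is hypothetical).
valData = imageDatastore("validation_images", IncludeSubfolders=true, LabelSource="foldernames");
quantOpts = dlquantizationOptions(MetricFcn={@(q) hComputeAccuracy(q, valData)});
valResults = validate(quantObj, valData, quantOpts);

The validation results report the accuracy metric for the quantized network alongside the baseline, which is what makes the iterative tuning of the quantization strategy possible.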
The library also supports pruning, which reduces network size by removing the network elements that have the smallest impact on inference accuracy.
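A compact sketch of the pruning side using the taylorPrunableNetwork workflow is shown below; the score-accumulation training loop is elided, the MaxToPrune value is an arbitrary placeholder, and the exact conversion and name-value details should be checked against the toolbox documentation:

% Wrap a trained dlnetwork so its filters can be scored and pruned.
prunableNet = taylorPrunableNetwork(net);

% In a custom training loop (elided), accumulate Taylor importance scores
% with updateScore, then periodically remove the lowest-scoring filters:
prunableNet = updatePrunables(prunableNet, MaxToPrune=8);   % placeholder count

% Convert back to a regular dlnetwork for fine-tuning or inference.
prunedNet = dlnetwork(prunableNet);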