How to access the INT8 quantized weights in the deep learning quantizer?

Views: 15 (last 30 days)
Yousef on 14 October 2025 at 11:03
Answered: Dor Rubin on 14 October 2025 at 14:47
I have quantized ResNet-18 using the deep learning quantizer. The idea is that I want to deploy it on an FPGA. The quantization process was successful and the model size was compressed to 10 MB. However, I want to see the quantized INT8 weights. How do I access them in the terminal? I can only see the floating-point values.
Below is my code and the output in the terminal:
Code:
% Save the network temporarily to calculate its size on disk
save('quantizedNet.mat', 'quantizedNet');
fileInfo = dir('quantizedNet.mat');
netSizeMB = fileInfo.bytes / (1024^2);
fprintf('Quantized Network Size: %.2f MB\n', netSizeMB);
%% Network architecture view
% analyzeNetwork(quantizedNet)   % uncomment to open the network analyzer
%% Quantizer details
qDetails = quantizationDetails(quantizedNet);   % quantization metadata for the network
% Choose layer and parameter
Layers = qDetails.QuantizedLearnables           % table with Layer, Parameter and Value columns
conv1weight = Layers.Value{1}                   % Value is a cell array; {} extracts the first parameter
conv1weight
Output:

Answers (1)

Dor Rubin on 14 October 2025 at 14:47
Hi Yousef,
You can access the integer representation by using the storedInteger method on the fi value. For example:
storedInteger(conv1weight)
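For a fuller picture, here is a minimal sketch that walks every quantized learnable in the quantizationDetails table and inspects its stored integers. It assumes quantizedNet is the network returned by quantize, that the Value column holds fi objects (as your floating-point display suggests), and that the variable names are only illustrative:
qDetails   = quantizationDetails(quantizedNet);       % quantization metadata
learnables = qDetails.QuantizedLearnables;            % table: Layer, Parameter, Value
for k = 1:height(learnables)
    fiVal  = learnables.Value{k};                     % fi object; displays scaled (real-world) values
    rawInt = storedInteger(fiVal);                    % the underlying stored integers
    disp(string(learnables.Layer(k)) + " / " + ...
         string(learnables.Parameter(k)) + " -> " + class(rawInt));
end
storedInteger returns the stored values in the smallest built-in integer type that can hold them, so for an 8-bit quantization you should see int8 arrays.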
Thanks,
Dor
