
How can I freeze layers when training a network with multiple outputs, and reduce training time?

24 views (last 30 days)
Since I am trying to train a model with multiple outputs, I cannot use deepNetworkDesigner and its training option.
So I followed the direct training method from this example, freezing the net's layers with the code below:
lgraph = layerGraph(net);   % extract the layer graph from the pretrained network
target = 290;               % freeze the first 290 layers
for i = 1:target
    try
        % Zero the learn rate factors of this layer's parameters
        L = freezeWeights(lgraph.Layers(i));
        lgraph = replaceLayer(lgraph,lgraph.Layers(i).Name,L);
    catch
        % Skip layers that have no learnable parameters
    end
end
net = dlnetwork(lgraph);
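For reference, freezeWeights is not a built-in function; it is a support function shipped with the MATLAB transfer-learning examples, and it simply sets every learn-rate-factor property of the layers it is given to zero. A rough sketch of what it does:

function layers = freezeWeights(layers)
% Set the learn rate factor of every learnable parameter to zero
for ii = 1:numel(layers)
    props = properties(layers(ii));
    for p = 1:numel(props)
        propName = props{p};
        if ~isempty(regexp(propName,'LearnRateFactor$','once'))
            layers(ii).(propName) = 0;
        end
    end
end
end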
I checked that WeightLearnRateFactor and BiasLearnRateFactor became zero, which means the layers are frozen.
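One quick way to spot-check this on the assembled dlnetwork is getLearnRateFactor, assuming your release supports its dlnetwork syntax ("conv_1" below is a hypothetical layer name):

% Both calls should return 0 for a frozen layer
getLearnRateFactor(net,"conv_1","Weights")
getLearnRateFactor(net,"conv_1","Bias")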
However, training still takes too much time.
So:
Q1: Is this the right way to freeze layers when training a multiple-output network?
Q2: How can I reduce training time by skipping the layers that are frozen?
Here is the base code from the example that I used for network training:
[loss,gradients,state] = dlfeval(@modelLoss,net,X,T1,T2);

function [loss,gradients,state] = modelLoss(net,X,T1,T2)
    % Forward pass through both output branches
    [Y1,Y2,state] = forward(net,X,Outputs=["softmax" "fc_2"]);
    lossLabels = crossentropy(Y1,T1);    % classification loss
    lossAngles = mse(Y2,T2);             % regression loss
    loss = lossLabels + 0.1*lossAngles;  % weighted sum of the two losses
    gradients = dlgradient(loss,net.Learnables);
end
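For context, in the multiple-output example this loss function is driven by a custom training loop roughly like the sketch below (numIterations and the mini-batch read are placeholders). One point worth spelling out: a learn rate factor of zero only zeroes the parameter update inside adamupdate; the forward and backward passes still run through the frozen layers, which is why freezing alone barely changes the training time.

trailingAvg = [];
trailingAvgSq = [];
for iteration = 1:numIterations
    % ... read the next mini-batch into X, T1, T2 ...
    [loss,gradients,state] = dlfeval(@modelLoss,net,X,T1,T2);
    net.State = state;
    % Frozen parameters receive a zero update here, but their gradients
    % were still computed above
    [net,trailingAvg,trailingAvgSq] = adamupdate(net,gradients, ...
        trailingAvg,trailingAvgSq,iteration);
end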

Accepted Answer

Aniketh on 12 July 2023
Yes, this is the right way to freeze the layers; as long as you are not seeing any drastic changes in the output, it shouldn't behave any differently for the case of predicting multiple outputs.
To further reduce training time and ignore the frozen layers, you can modify the modelLoss function to compute and backpropagate gradients only for the unfrozen layers, by selecting the corresponding rows of net.Learnables. Here's a sample of how this could work:
function [loss, gradients, state] = modelLoss(net, X, T1, T2)
    [Y1, Y2, state] = forward(net, X, Outputs=["softmax" "fc_2"]);
    lossLabels = crossentropy(Y1, T1);
    lossAngles = mse(Y2, T2);
    loss = lossLabels + 0.1 * lossAngles;
    % Compute gradients only for unfrozen layers. Not every layer has a
    % WeightLearnRateFactor property, and layer indices do not map one-to-one
    % onto rows of net.Learnables, so select the rows by layer name instead.
    isUnfrozen = arrayfun(@(l) isprop(l,"WeightLearnRateFactor") ...
        && l.WeightLearnRateFactor > 0, net.Layers);
    unfrozenNames = string({net.Layers(isUnfrozen).Name});
    rows = ismember(net.Learnables.Layer, unfrozenNames);
    gradients = dlgradient(loss, net.Learnables(rows,:));
end
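One caveat, flagged as an assumption: adamupdate pairs the gradients table with the parameters row by row, so a gradients table built from a subset of net.Learnables cannot be passed to adamupdate together with the whole network. A hedged sketch of keeping the pairing consistent, updating the subset as a parameter table and writing it back (this assumes the rows of net.Learnables are assignable in your release):

% rows: the same logical index of unfrozen learnables used in modelLoss
subsetParams = net.Learnables(rows,:);
[subsetParams,trailingAvg,trailingAvgSq] = adamupdate( ...
    subsetParams,gradients,trailingAvg,trailingAvgSq,iteration);
net.Learnables(rows,:) = subsetParams;   % write the updated subset back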
1 Comment
Brian Park on 15 July 2023
To test it, I froze everything except the last layers and kept only rows 253 to 255 of net.Learnables:
unfrozen_Learnables = net.Learnables(253:255,:);
gradients = dlgradient(loss,unfrozen_Learnables);
and I get the following error when running it:
Error using +
Arrays have incompatible sizes for this operation.

Error in + (line 39)
zdata = matlab.lang.internal.move(xdata) + matlab.lang.internal.move(ydata);
...
Error in adamupdate (line 152)
[p, avg_g, avg_gsq] = deep.internal.networkContainerFixedArgsFun(func, ...
Training works well, but slowly, when using just net.Learnables instead of the specified unfrozen_Learnables.
net.Learnables is a 256x3 table with dlarray values, and unfrozen_Learnables is a 6x3 table with dlarray values.
Do you know the difference between net.Learnables and unfrozen_Learnables?


More Answers (0)
