Value to differentiate is not traced. It must be a traced real dlarray scalar. Use dlgradient inside a function called by dlfeval to trace the variables.

Hello, I am working on a custom loss function that reduces dimensionality by maximizing the Bhattacharyya distance.
But it produced the following error:
Error using dlarray/dlgradient
Value to differentiate is not traced. It must be a traced real dlarray scalar. Use dlgradient inside a function called by dlfeval to trace the variables.
Error in mutiatt_nld>modelGradients (line 54)
gradients = dlgradient(loss, dlnet.Learnables);
Error in deep.internal.dlfeval (line 17)
[varargout{1:nargout}] = fun(x{:});
Error in deep.internal.dlfevalWithNestingCheck (line 15)
[varargout{1:nargout}] = deep.internal.dlfeval(fun,varargin{:});
Error in dlfeval (line 31)
[varargout{1:nargout}] = deep.internal.dlfevalWithNestingCheck(fun,varargin{:});
Error in mutiatt_nld (line 24)
[gradients, loss] = dlfeval(@modelGradients, dlnet, dlX, N);
The code:
% Parameter settings
M = 10;         % Dimension of input features
N = 50;         % Number of samples per class
numEpochs = 100;
learnRate = 0.01;

% Generate example data
X = rand(2*N, M);
X(1:N, :) = X(1:N, :) + 1;         % Data for class A
X(N+1:end, :) = X(N+1:end, :) - 1; % Data for class B

% Define the neural network
layers = [
    featureInputLayer(M, 'Normalization', 'none')
    fullyConnectedLayer(10)
    reluLayer
    fullyConnectedLayer(3)
    ];
dlnet = dlnetwork(layerGraph(layers));

% Custom training loop
for epoch = 1:numEpochs
    dlX = dlarray(X', 'CB'); % Transpose input data to match the network's expected format
    [gradients, loss] = dlfeval(@modelGradients, dlnet, dlX, N);
    dlnet = dlupdate(@sgdmupdate, dlnet, gradients, learnRate);
    disp(['Epoch ' num2str(epoch) ', Loss: ' num2str(extractdata(loss))]);
end

% Testing phase
X_test = rand(N, M);               % Assume test data is randomly generated
dlX_test = dlarray(X_test', 'CB'); % Transpose input data to match the network's expected format
Y_test = predict(dlnet, dlX_test);
disp('Dimensionality reduction results during testing:');
disp(extractdata(Y_test)');

% Custom loss function
function loss = customLoss(Y, N)
    YA = extractdata(Y(:, 1:N))';
    YB = extractdata(Y(:, N+1:end))';
    muA = mean(YA);
    muB = mean(YB);
    covA = cov(YA);
    covB = cov(YB);
    covMean = (covA + covB) / 2;
    d = 0.25 * (muA - muB) / covMean * (muA - muB)' + ...
        0.5 * log(det(covMean) / sqrt(det(covA) * det(covB)));
    loss = -d;            % Maximize Bhattacharyya distance
    loss = dlarray(loss); % Ensure loss is a tracked dlarray scalar
end

% Model gradient function
function [gradients, loss] = modelGradients(dlnet, dlX, N)
    Y = forward(dlnet, dlX);
    loss = customLoss(Y, N);
    gradients = dlgradient(loss, dlnet.Learnables);
end

% Update function
function param = sgdmupdate(param, grad, learnRate)
    param = param - learnRate * grad;
end

Answers (1)

Ganesh on 12 Jun 2024
You are getting this error because you call "extractdata" on a traced argument. This breaks tracing: everything computed from the extracted numeric data is invisible to automatic differentiation, so the loss is no longer a traced dlarray. The documentation on functions with dlarray support describes the rules a custom loss function must follow.
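For illustration, the difference inside a model gradients function comes down to this (mu is just a placeholder name):

% Loses the gradient path: extractdata returns a plain numeric array,
% so everything computed from it is invisible to dlgradient.
mu = mean(extractdata(Y), 2);

% Keeps the gradient path: operate on the traced dlarray directly.
mu = mean(Y, 2);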
I have tried your code with a different loss function, and this worked:
% Parameter settings
M = 10;         % Dimension of input features
N = 50;         % Number of samples per class
numEpochs = 100;
learnRate = 0.01;

% Generate example data
X = rand(2*N, M);
X(1:N, :) = X(1:N, :) + 1;         % Data for class A
X(N+1:end, :) = X(N+1:end, :) - 1; % Data for class B

% Define the neural network
layers = [
    featureInputLayer(M, 'Normalization', 'none')
    fullyConnectedLayer(10)
    reluLayer
    fullyConnectedLayer(3)
    ];
dlnet = dlnetwork(layerGraph(layers));

% Custom training loop
for epoch = 1:numEpochs
    dlX = dlarray(X', 'CB'); % Transpose input data to match the network's expected format
    Y = forward(dlnet, dlX);
    Yact = rand(size(Y));
    [gradients, loss] = dlfeval(@modelGradients, dlnet, Y, Yact);
    dlnet = dlupdate(@sgdmupdate, dlnet, gradients, learnRate);
    disp(['Epoch ' num2str(epoch) ', Loss: ' num2str(extractdata(loss))]);
end

% Testing phase
X_test = rand(N, M);               % Assume test data is randomly generated
dlX_test = dlarray(X_test', 'CB'); % Transpose input data to match the network's expected format
Y_test = predict(dlnet, dlX_test);
disp('Dimensionality reduction results during testing:');
disp(extractdata(Y_test)');

% Model gradient function
function [gradients, loss] = modelGradients(dlnet, Y, Yact)
    loss = mse(Y, Yact);
    gradients = dlgradient(loss, dlnet.Learnables);
end

% Update function
function param = sgdmupdate(param, grad, learnRate)
    param = param - learnRate * grad;
end
I understand that because your loss function is unsupervised, you are running into issues; you will have to refactor the code accordingly. A sketch of one possible refactoring follows.
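One way to refactor the original Bhattacharyya loss is to build it entirely from dlarray-supported operations. As far as I know, "cov" and "det" are not dlarray-enabled, so the sketch below approximates each class with a diagonal Gaussian: the variances come from traced mean operations, and the log-determinants reduce to sums of log variances. The function name, the diagonal approximation, and the standard 1/8 coefficient are illustrative choices here, not a drop-in replacement for your exact formula:

% Sketch of a loss that keeps derivative tracing intact. Only
% dlarray-supported operations are used (indexing, mean, sum, log,
% elementwise arithmetic). Diagonal covariances are an approximation.
function loss = tracedBhattacharyyaLoss(Y, N)
    Y = stripdims(Y);               % drop the 'CB' labels; tracing is preserved
    YA = Y(:, 1:N);                 % class A embeddings (features-by-N)
    YB = Y(:, N+1:end);             % class B embeddings
    muA = mean(YA, 2);              % per-feature means, still traced
    muB = mean(YB, 2);
    varA = mean((YA - muA).^2, 2);  % per-feature variances (diagonal covariance)
    varB = mean((YB - muB).^2, 2);
    varMean = (varA + varB) / 2;
    dMu = muA - muB;
    % Bhattacharyya distance between diagonal Gaussians:
    % (1/8)*sum(dMu.^2./varMean) + (1/2)*sum(log(varMean./sqrt(varA.*varB)))
    d = 0.125 * sum(dMu.^2 ./ varMean) ...
        + 0.5 * sum(log(varMean ./ sqrt(varA .* varB)));
    loss = -d;                      % maximize the distance by minimizing -d
end

Call it from modelGradients in place of customLoss; in practice you may also want to add a small epsilon to the variances for numerical stability.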
Hope this helps!
1 Comment
yingyu jiang on 13 Jun 2024
Thank you very much!
But I could not run your demonstration program successfully; it produced this error:
Dot indexing is not supported for variables of this type.
Error in deep.internal.recording.containerfeval>iTablesToCells (line 460)
args{input} = args{input}.Value;
Error in deep.internal.recording.containerfeval>iProcessNetwork_Nout_Nin (line 358)
iterableArgs = iTablesToCells(iterableArgs);
Error in deep.internal.recording.containerfeval>iDispatch_Nout_Nin (line 194)
outputs = iProcessNetwork_Nout_Nin(fun, paramFun, numOut, ...
Error in deep.internal.recording.containerfeval (line 38)
outputs = iDispatch_Nout_Nin(allowNetInput, fun, paramFun, numOut, ...
Error in deep.internal.networkContainerFixedArgsFun (line 29)
varargout = deep.internal.recording.containerfeval(...
Error in dlupdate (line 124)
[varargout{1:nargout}] = deep.internal.networkContainerFixedArgsFun(...
Error in mutiatt_nld (line 29)
dlnet = dlupdate(@sgdmupdate, dlnet, gradients, learnRate);
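This second error is unrelated to tracing: the extra inputs to dlupdate must have the same table structure as dlnet.Learnables and gradients, but learnRate is a plain scalar, so dlupdate fails when it tries to index it like a table (args{input}.Value in the stack above). One way to sidestep this, assuming the same training loop, is to capture the scalar in an anonymous function instead of passing it as an argument:

% dlupdate applies the function to each learnable parameter and its
% matching gradient; learnRate is captured from the workspace rather
% than passed as an extra, table-shaped input.
dlnet = dlupdate(@(p, g) p - learnRate*g, dlnet, gradients);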
