Define Custom Loss Function for Tabular Data

6 views (last 30 days)
Ramiro on 23 Nov 2024
Commented: Matt J on 24 Nov 2024
Hi, I am trying to implement a custom neural network from the attached paper, but I get the following error:
Value to differentiate is not traced. It must be a traced real dlarray scalar. Use dlgradient inside a function called by dlfeval to trace the variables.
I know that I cannot use extractdata, but I could not figure out another way to compute the loss.
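As far as I understand, dlgradient can only differentiate the loss if every operation from forward to the final loss value stays in dlarray arithmetic. For example, a simple pattern like this (with the built-in mse standing in for my loss) traces fine:
function [loss,gradients] = simpleLoss(net,X,T)
    %Forward pass and loss stay in dlarray operations, so the trace survives
    phiX = forward(net,X);
    loss = mse(phiX,T);
    gradients = dlgradient(loss,net.Learnables);
end
But my loss needs quadprog, which only accepts plain numeric arrays.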
The code that reproduces the error is:
%Generate sample data (1000 instances with 4 attributes)
Data = rand([1000,4]);
%Network layers hyperparameters
[nrow, ncol] = size(Data);
relu_scale = 0.1;
%Network architecture
layers = [
    featureInputLayer(ncol,"Name","features")
    fullyConnectedLayer(10,"Name","hidlayer01")
    leakyReluLayer(relu_scale,"Name","leakyrelu")
    fullyConnectedLayer(10,"Name","hidlayer02")
    leakyReluLayer(relu_scale,"Name","leakyrelu_1")
    fullyConnectedLayer(2,"Name","output")];
%Network initialization
net = dlnetwork(layers);
net = initialize(net);
%Clean up auxiliary variables
clear layers relu_scale;
%Training options for Adam Solver
numIterations = 1;
learningRate = 1e-4;
trailingAvg = [];
trailingAvgSq = [];
gradDecay = 0.9;
gradDecaySq = 0.99;
%Training options for SVDD
v = 0.10;
C = 1/(nrow*v);
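%(C = 1/(nrow*v) is the usual SVDD box constraint, where v is roughly the expected fraction of outliers)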
lb = zeros(nrow,1);
ub = C*(lb+1);
%Convert data to dlarray
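%('CB' labels the rows as channels and the columns as batch observations)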
X = dlarray(Data','CB');
monitor = trainingProgressMonitor(Metrics = "Loss",XLabel = "Iteration");
iteration = 0;
%Custom training loop (a single full batch per iteration)
while iteration < numIterations && ~monitor.Stop
    iteration = iteration + 1;
    %Evaluate the model loss and gradients using dlfeval and the modelLoss function
    [loss,gradients] = dlfeval(@modelLoss,net,X,C,lb,ub,nrow);
    %Update the network parameters
    [net,trailingAvg,trailingAvgSq] = adamupdate(net,gradients, ...
        trailingAvg,trailingAvgSq,iteration,learningRate,gradDecay,gradDecaySq);
    %Update the training progress monitor
    recordMetrics(monitor,iteration,Loss=loss);
    monitor.Progress = 100 * iteration/numIterations;
end
function [loss,gradients] = modelLoss(net,X,C,lb,ub,nrow)
%The modelLoss function calculates the CDSVDD loss and returns the loss and
%the gradients of the loss with respect to the network learnable parameters
    %Get phiX
    phiX = forward(net,X);
    %Compute loss
    loss = CDSVDDLoss(phiX,C,lb,ub,nrow);
    %Calculate gradients of the loss with respect to the network learnable parameters
    gradients = dlgradient(loss,net.Learnables);
end
function loss = CDSVDDLoss(phiX,C,lb,ub,nrow)
    %Convert phiX to a plain numeric array
    phiX = double(transpose(extractdata(phiX)));
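    %NOTE: extractdata returns plain numeric data, so from this point on the
    %computation is no longer traced -- this is what triggers the dlgradient error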
    %Compute Q matrix
    Q = phiX*phiX';
    %Solve the QP problem
    alpha = quadprog(2*Q,-diag(Q),[],[],ones(1,nrow),1,lb,ub);
    %Set alpha values lower than the optimality tolerance to zero
    alpha(alpha < 1e-8) = 0;
    %Compute sphere center
    sphC = sum(alpha.*phiX,1);
    %Compute distance of each data point to the sphere center
    distC = pdist2(phiX,sphC);
    %Compute sphere radius from the boundary support vectors
    sv = alpha > 0 & alpha < C;
    sphR = mean(distC(sv));
    %Compute the loss
    loss = sphR + C*sum(max(distC-sphR,0));
    loss = dlarray(loss);
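    %NOTE: wrapping the numeric result back into a dlarray does not restore the trace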
end
I appreciate any help.
Regards
Ramiro
3 Comments
Ramiro on 24 Nov 2024
Hi, I would appreciate a learning path for solving this problem. I am new to deep learning in MATLAB.
Regards
Ramiro
Matt J on 24 Nov 2024
You will probably have to implement your own gradient calculation.
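A cheaper alternative (a sketch only, untested, and not the full gradient) is to hold the quadprog solution alpha fixed: solve the QP on a detached numeric copy of phiX, then rebuild the center, radius, and loss from the traced phiX using only dlarray operations, e.g. something like:
function loss = CDSVDDLossTraced(phiX,C,lb,ub,nrow)
    %Solve the QP on a detached numeric copy; alpha is treated as a constant,
    %so the gradient ignores how alpha itself depends on the network outputs
    Z = double(extractdata(phiX))'; %nrow-by-2 numeric, no longer traced
    Q = Z*Z';
    alpha = quadprog(2*Q,-diag(Q),[],[],ones(1,nrow),1,lb,ub);
    alpha(alpha < 1e-8) = 0;
    sv = alpha > 0 & alpha < C;
    %Rebuild the loss from the traced phiX using only dlarray operations
    phiXu = stripdims(phiX); %drop the 'CB' labels but keep the trace (2-by-nrow)
    sphC = phiXu*alpha; %sphere center, 2-by-1, traced
    distC = sqrt(sum((phiXu - sphC).^2,1)); %distances to center, 1-by-nrow, traced
    sphR = mean(distC(sv)); %radius from the boundary support vectors
    loss = sphR + C*sum(max(distC - sphR,0));
end
modelLoss would then call CDSVDDLossTraced in place of CDSVDDLoss, and the loss it returns stays a traced dlarray scalar, which is what dlgradient needs.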


Answers (0)

Release: R2024a