Error using gpuArray/reshape (Number of elements must not change)

Rd on 3 December 2020
I have referred to the following File Exchange submission for a dual-input-layer CNN:
Kenta (2020). Image Classification using CNN with Multi Input 複数の入力層を持つCNN (https://www.mathworks.com/matlabcentral/fileexchange/74760-image-classification-using-cnn-with-multi-input-cnn).
I have training and testing images for 50 classes, and I need to fuse the features learned from the part1 and part2 images. My input size is 240×320.
There is an error when reshaping the concatenated features. Kindly help me resolve it. The code and error are attached below.
%Two networks for deep learning, called dlnet1 and dlnet2, were prepared. The operation of the CNN is as follows:
%The input images in part1 were convolved and the information was passed on with the function forward
%The same operation was done with the part2 images and the features were aggregated
%The aggregated features were processed with some fully connected layers called dlnet3
%The cross-entropy loss was calculated based on the labels and the output from the softmax layer
%Back-propagate the loss and update the weights and biases in dlnet3
%Update the parameters in dlnet1
%Update the parameters in dlnet2
imagefolder = 'C:\Users\study\PG\PROJECT\Training';
imds = imageDatastore(imagefolder, ...
'IncludeSubfolders',true, ...
'LabelSource','foldernames');
[XTrain, YTrain] = imds2array(imds);
XTrain1=XTrain(:,:,:,1:300); % extract first part
XTrain2=XTrain(:,:,:,301:600);% extract second part
classes = categories(YTrain);% retrieve the class information with the type of categorical
numClasses = numel(classes);
%Display the examples
%Display the some training images randomly from the upper and down part, respectively. To show the tiled image, use montage.
dispIdx=randi(50,[10 1]);
dispX1=XTrain1(:,:,:,dispIdx);
figure;montage(dispX1)
dispX2=XTrain2(:,:,:,dispIdx);
figure;montage(dispX2)
%Define Network
%The dlnet1, 2 and 3 are created in this section. To allow you to follow the flow of this example, this process was done with a helper function located at the end of this script.
numHiddenDimension=50;
dlnet1=createLayer(XTrain1,numHiddenDimension);
dlnet2=createLayer(XTrain2,numHiddenDimension);
dlnet3=createLayerFullyConnect(numHiddenDimension);
%Specify the training options
velocity1 = [];velocity2 = [];velocity3 = [];
numEpochs = 10;
miniBatchSize = 20;
numObservations = numel(YTrain);
numIterationsPerEpoch = floor(numObservations./miniBatchSize);
averageSqGrad1=[];
averageSqGrad2=[];
averageSqGrad3=[];
averageGrad1=[];
averageGrad2=[];
averageGrad3=[];
epsilon=0.001;
%learnRate = 0.001;
GradDecay=0.9;
sqGradDecay= 0.9;
executionEnvironment = "auto";
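% Note: learnRate is computed from initialLearnRate and decay inside the training loop below,
% and sgdmupdate uses momentum, but none of these are defined in this snippet. A minimal
% sketch with assumed values (initialLearnRate matches the commented-out learnRate above;
% decay and momentum are placeholders to adjust; 0.9 is also sgdmupdate's default momentum):
initialLearnRate = 0.001;
decay = 0.01;
momentum = 0.9;
% (averageGrad1-3, averageSqGrad1-3, epsilon, GradDecay and sqGradDecay above are only needed
% for adamupdate and are not used by the sgdmupdate calls in this loop.)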
%Prepare for plotting training process
%Initialize the training progress plot.
figure
lineLossTrain = animatedline;
xlabel("Iteration")
ylabel("Loss")
%Train Model
iteration = 0;
start = tic;
% Loop over epochs.
for epoch = 1:numEpochs
% Loop over mini-batches.
for i = 1:numIterationsPerEpoch
iteration = iteration + 1;
% Read mini-batch of data and convert the labels to dummy
% variables.
idx = (i-1)*miniBatchSize+1:i*miniBatchSize;
X1 = XTrain1(:,:,:,idx);
X2 = XTrain2(:,:,:,idx);
%X3 = XTrain3(:,:,:,idx);
% convert the label into one-hot vector to calculate the loss
Y = zeros(numClasses, miniBatchSize, 'single');
for c = 1:numClasses
Y(c,YTrain(idx)==classes(c)) = 1;
end
% Convert mini-batch of data to dlarray.
dlX1 = dlarray(single(X1),'SSCB');
dlX2 = dlarray(single(X2),'SSCB');
%dlX3 = dlarray(single(X3),'SSCB');
% If training on a GPU, then convert data to gpuArray.
if (executionEnvironment == "auto" && canUseGPU) || executionEnvironment == "gpu"
dlX1 = gpuArray(dlX1);
dlX2 = gpuArray(dlX2);
%dlX3 = gpuArray(dlX3);
end
%the training loss and the gradients after backpropagation are
%calculated using the helper function modelGradients
[gradients1,gradients2,gradients3,loss] = dlfeval(@modelGradients,dlnet1,dlnet2,dlnet3,dlX1,dlX2,dlarray(Y));
learnRate = initialLearnRate/(1 + decay*iteration);
% Update the network parameters using the SGDM optimizer.
% Update the parameters in dlnet1 to 3 sequentially
[dlnet3.Learnables, velocity3] = sgdmupdate(dlnet3.Learnables, gradients3, velocity3, learnRate, momentum);
[dlnet2.Learnables, velocity2] = sgdmupdate(dlnet2.Learnables, gradients2, velocity2, learnRate, momentum);
[dlnet1.Learnables, velocity1] = sgdmupdate(dlnet1.Learnables, gradients1, velocity1, learnRate, momentum);
% Display the training progress.
D = duration(0,0,toc(start),'Format','hh:mm:ss');
addpoints(lineLossTrain,iteration,double(gather(extractdata(loss))))
title("Epoch: " + epoch + ", Elapsed: " + string(D))
drawnow
end
end
%Test Model
%Test the classification accuracy of the model by comparing the predictions on a test set with the true labels.
imagefolder2 = 'C:\Users\manjurama\Desktop\study\PG\PROJECT\finger vein database - Copy\Testing';
imds2 = imageDatastore(imagefolder2, ...
'IncludeSubfolders',true, ...
'LabelSource','foldernames');
[XTest, YTest] = imds2array(imds2);
XTest1=XTest(:,:,:,1:150); % extract the upper part
XTest2=XTest(:,:,:,151:300);% extract the down part
classes2 = categories(YTest);% retrieve the class information with the type of categorical
numClasses2 = numel(classes2);
%Convert the data to a dlarray object with dimension format 'SSCB'. For GPU prediction, also convert the data to gpuArray.
dlXTest1 = dlarray(XTest1,'SSCB');
dlXTest2 = dlarray(XTest2,'SSCB');
if (executionEnvironment == "auto" && canUseGPU) || executionEnvironment == "gpu"
dlXTest1 = gpuArray(dlXTest1);
dlXTest2 = gpuArray(dlXTest2);
end
dlYPred1 = forward(dlnet1,dlXTest1);
dlYPred2 = forward(dlnet2,dlXTest2);
dlX_concat=[dlYPred1;dlYPred2];
dlX_concat=reshape(dlX_concat,[1 numHiddenDimension*2, 1]);
dlX_concat=dlarray(single(dlX_concat),'SSCB');
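% Note: dlX_concat here is (numHiddenDimension*2)-by-150 for this test split, so a target size
% of [1 numHiddenDimension*2 1] drops the batch dimension and changes the number of elements;
% the fourth size should be the number of test observations (see the sketch after the code).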
%To classify images using a dlnetwork object, use the predict function and find the classes with the highest scores.
dlYPred = predict(dlnet3,dlX_concat); % you can also use the function forward and softmax to predict
[~,idx] = max(extractdata(dlYPred),[],1);
YPred = classes(idx);
%Evaluate the classification accuracy.
accuracy = mean(YPred==YTest)
function [X, T] = imds2array(imds)
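% Converts an imageDatastore into an h-by-w-by-c-by-N double array (via im2double) plus its labels.
% Assumes every image in the datastore has the same size and channel count as the first one.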
imagesCellArray = imds.readall();
numImages = numel( imagesCellArray );
[h, w, c] = size( imagesCellArray{1} );
X = zeros( h, w, c, numImages );
for i=1:numImages
X(:,:,:,i) = im2double( imagesCellArray{i} );
end
T = imds.Labels;
end
function dlnet=createLayer(~,numHiddenDimension)
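% Builds one convolutional branch mapping a 240-by-320-by-3 image to a numHiddenDimension-dimensional
% feature vector. The first input argument is unused (the input size is hard-coded below).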
layers = [
imageInputLayer([240 320 3],"Name","imageinput","Normalization","none")
convolution2dLayer([3 3],8,"Name","conv_1","Padding","same")
batchNormalizationLayer("Name","batchnorm_1")
reluLayer("Name","relu_1")
maxPooling2dLayer([2 2],"Name","maxpool_1","Stride",[2 2])
convolution2dLayer([3 3],16,"Name","conv_2","Padding","same")
batchNormalizationLayer("Name","batchnorm_2")
reluLayer("Name","relu_2")
maxPooling2dLayer([2 2],"Name","maxpool_2","Stride",[2 2])
convolution2dLayer([3 3],32,"Name","conv_3","Padding","same")
batchNormalizationLayer("Name","batchnorm_3")
reluLayer("Name","relu_3")
fullyConnectedLayer(numHiddenDimension,"Name","fc")];
lgraph = layerGraph(layers);
dlnet = dlnetwork(lgraph);
end
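% dlnet3 treats the fused features as a 1-by-(numHiddenDimension*2)-by-1 'image' per observation,
% so with numHiddenDimension = 50 the data passed to forward/predict must be shaped
% 1 x 100 x 1 x batchSize and labelled 'SSCB'.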
function dlnet=createLayerFullyConnect(numHiddenDimension)
layers = [
imageInputLayer([1 numHiddenDimension*2 1],"Name","imageinput","Normalization","none")
%fullyConnectedLayer(100,"Name","fc_1")
fullyConnectedLayer(50,"Name","fc_2")];
lgraph = layerGraph(layers);
dlnet = dlnetwork(lgraph);
end
function [gradients1,gradients2,gradients3, loss] = modelGradients(dlnet1,dlnet2,dlnet3,dlX1,dlX2,Y)
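% Forwards both branches, fuses their features, evaluates dlnet3 with a softmax/cross-entropy
% loss, and returns the loss plus the gradients for the learnables of all three networks.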
dlYPred1 = forward(dlnet1,dlX1);
dlYPred2 = forward(dlnet2,dlX2);
dlX_concat=[dlYPred1;dlYPred2];
dlX_concat=reshape(dlX_concat,[1 50, 1, 20]);% the last value corresponds to the mini-batch size (20 here)
dlX_concat=dlarray(single(dlX_concat),'SSCB');
dlY_concat=forward(dlnet3,dlX_concat);
dlYPred_concat = softmax(dlY_concat);
loss = crossentropy(dlYPred_concat,Y);
[gradients1,gradients2,gradients3] = dlgradient(loss,dlnet1.Learnables,dlnet2.Learnables,dlnet3.Learnables);
end
It shows the error quoted in the title, Error using gpuArray/reshape (Number of elements must not change), at the reshape of the concatenated features.
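For reference, a likely source of the mismatch, read from the code above (the full error text is not attached): forward(dlnet1,dlX1) and forward(dlnet2,dlX2) each return a numHiddenDimension-by-miniBatchSize feature matrix (50-by-20 here), so [dlYPred1; dlYPred2] holds 100*20 = 2000 elements, while the requested size [1 50 1 20] only holds 1000, hence the element-count error. A minimal sketch of a reshape that keeps the element count and matches dlnet3's imageInputLayer([1 numHiddenDimension*2 1]), assuming the rest of modelGradients stays as written:
dlX_concat = [dlYPred1; dlYPred2];                 % (2*numHiddenDimension) x miniBatchSize, e.g. 100 x 20
dlX_concat = reshape(dlX_concat, [1, size(dlX_concat,1), 1, size(dlX_concat,2)]);  % 1 x 100 x 1 x 20
dlX_concat = dlarray(single(dlX_concat), 'SSCB');  % relabel for the imageInputLayer of dlnet3
The same size vector, with the fourth element taken from the number of test observations, would apply to the reshape in the Test Model section.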
Regards,
Rama senthil

Answers (0)
