The output size of the last layer does not match the response size?

I'm training a CNN for regression using the built-in MATLAB functions 'digitTrain4DArrayData' and 'digitTest4DArrayData', where I have replaced the original folders in the related directory with my own training/validation images and Excel files of numerical data. When running, I first got the error:
Error using trainNetwork (line 184)
Invalid input data for fully connected layer. The number of channels in
the input data (1) must match the layer's expected input size (32).
I tried to resolve this by changing my fully-connected layer output size to 32, and instead got the following:
Invalid training data. The output size (32) of the last layer does not
match the number of responses (1).
I believe this has to do with the convolution filter, padding, and pooling layer sizing, and I have tried to rectify it using the equation for the output size:
O = (W - K + 2P)/S + 1
where O = output size, W = input size, K = kernel/pool size, P = padding size, and S = stride length.
I have tried using this equation to match the output size to the number of responses but can't seem to get it right. Could I get some help with this? Secondly, because my initial input image dimensions are unequal (153 X 365 X 3), do I need to utilise padding to make the length and width the same?
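For reference, a quick check of the formula against the first two layers in the code below (just a sketch, assuming stride 1 and the padding implied by 'Padding','same'):
W = [153 365]; %input height and width
K = 3; P = 1; S = 1; %3x3 convolution with 'same' padding, so P = (K-1)/2 = 1
convOut = (W - K + 2*P)/S + 1 %= [153 365]: 'same' padding preserves the spatial size
K = 2; P = 0; S = 1; %2x2 average pooling, stride 1, no padding
poolOut = (convOut - K + 2*P)/S + 1 %= [152 364]: each such pooling layer shrinks H and W by 1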
My code is below,
Thank you
[XTrain,~,YTrain] = digitTrain4DArrayData;
[XValidation,~,YValidation] = digitTest4DArrayData;
%Loading in training and validation images
layers = [
imageInputLayer([153 365 3])
convolution2dLayer(3,8,'Padding','same') %8 filters of size 3x3; 'same' padding keeps the output spatial size equal to the input size
batchNormalizationLayer %Normalising the activations of each mini-batch (zero mean, unit variance)
reluLayer %Rectified Linear Unit (ReLU): threshold operation on each element of the input, where any value less than zero is set to zero
averagePooling2dLayer(2,'Stride',1) %Pool size 2x2 with Stride/step of 1
convolution2dLayer(3,16,'Padding','same')
batchNormalizationLayer
reluLayer
averagePooling2dLayer(2,'Stride',1)
convolution2dLayer(3,32,'Padding','same')
batchNormalizationLayer
reluLayer
convolution2dLayer(3,32 ,'Padding','same')
batchNormalizationLayer
reluLayer
dropoutLayer(0.2)
fullyConnectedLayer(32) %Regression problem: the FC layer must precede the regression layer; output size changed from 1 to 32 while trying to resolve the error
regressionLayer];
%Defining the layer array: four convolutional blocks, two of them followed by average pooling.
miniBatchSize = 128;
validationFrequency = floor(numel(YTrain)/miniBatchSize);
options = trainingOptions('sgdm', ...
'MiniBatchSize',miniBatchSize, ...
'MaxEpochs',30, ... %Trained for 30 epochs
'InitialLearnRate',1e-3, ... %Setting initial learn rate to 0.001
'LearnRateSchedule','piecewise', ...
'LearnRateDropFactor',0.1, ...
'LearnRateDropPeriod',20, ... %Multiplying the learn rate by a factor of 0.1 every 20 epochs
'Shuffle','every-epoch', ...
'ValidationData',{XValidation,YValidation}, ...
'ValidationFrequency',validationFrequency, ...
'Plots','training-progress', ...
'Verbose',false);
%Training the network
net = trainNetwork(XTrain,YTrain,layers,options);
%trainNetwork trains on a compatible GPU automatically if one is available.
YPredicted = predict(net,XValidation);
%Test the performance of the network by evaluating the accuracy on the validation data
predictionError = YValidation - YPredicted;
%Evaluating performance of the model by calculating:
%1. The percentage of predictions within an acceptable error margin
%2. The RMSE of the predicted and actual values
thr = 10; %setting threshold to 10
numCorrect = sum(abs(predictionError) < thr);
numValidationImages = numel(YValidation);
%calculating no. predictions within acceptable error margin
accuracy = numCorrect/numValidationImages; %fraction of predictions within the threshold
squares = predictionError.^2;
rmse = sqrt(mean(squares));
%Calculating RMSE
1 Comment
Saurabh Sharma on 13 Oct 2022
I am trying to train an image-to-image regression neural network. I have used the combine function to combine the input images and response images into a combined datastore ds, and then trained with trainNetwork(ds, lgraph, options). It shows the error: output size does not match with response size.


Accepted Answer

Srivardhan Gadila on 29 Mar 2021
The issue is probably due to the format of the responses of the training data, not the convolution filter, padding, etc. Also, you don't have to make the image dimensions equal; they can stay as they are, provided all input observations of the training data have the same shape.
Refer to the documentation of the Input Arguments of the trainNetwork function for the syntax: net = trainNetwork(images,responses,layers,options).
The following code might help you:
inputSize = [153 365 3];
numSamples = 128;
responseSize = 32; %number of responses per image, matching fullyConnectedLayer(responseSize)
%% Generate random data for training the network.
trainData = randn([inputSize numSamples]); %153-by-365-by-3-by-128 image array
trainLabels = randn([numSamples responseSize]); %128-by-32 matrix: one row of responses per image
%% Create a network.
layers = [
imageInputLayer(inputSize)
convolution2dLayer(3,8,'Padding','same')
reluLayer
averagePooling2dLayer(2,'Stride',1)
convolution2dLayer(3,16,'Padding','same')
batchNormalizationLayer
reluLayer
averagePooling2dLayer(2,'Stride',1)
convolution2dLayer(3,32,'Padding','same')
batchNormalizationLayer
reluLayer
convolution2dLayer(3,32 ,'Padding','same')
batchNormalizationLayer
reluLayer
dropoutLayer(0.2)
fullyConnectedLayer(responseSize)
regressionLayer];
analyzeNetwork(layers);
%% Define training options.
options = trainingOptions('adam', ...
'InitialLearnRate',0.005, ...
'LearnRateSchedule','piecewise',...
'MaxEpochs',1, ...
'MiniBatchSize',4, ...
'Verbose',1, ...
'Plots','training-progress', ...
'ExecutionEnvironment','cpu');
%% Train the network.
net = trainNetwork(trainData,trainLabels,layers,options);
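As a usage note, for the original single-response setup the same pattern would use responseSize = 1, so the labels become an N-by-1 vector matching fullyConnectedLayer(1) (a sketch):
trainLabels = randn([numSamples 1]); %one numeric response per image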
2 Comments
Sho Wright on 6 Apr 2021
Hello,
Makes a lot of sense defining the response size early on! I have since resolved the problem in a similar fashion, by changing the final convolution layer's output size to match the FC layer, like so:
layers = [
imageInputLayer([96 201 3]) %96x201x3 pixel images
convolution2dLayer(10,5,'Padding','same') %5 filters of size 10x10 with 'same' padding
batchNormalizationLayer %Normalising the activations of each mini-batch
reluLayer %Rectified Linear Unit (ReLU): any value less than zero is set to zero
averagePooling2dLayer(2,'Stride',1) %Pool size 2x2 with Stride/step of 1
convolution2dLayer(5,5,'Padding','same')
batchNormalizationLayer
reluLayer
averagePooling2dLayer(2,'Stride',1)
convolution2dLayer(5,5,'Padding','same')
batchNormalizationLayer
reluLayer
averagePooling2dLayer(2,'Stride',1)
convolution2dLayer(5,1,'Padding','same') %1 filter of size 5x5, matching the fully connected layer's output size of 1
batchNormalizationLayer
reluLayer
dropoutLayer(0.2)
fullyConnectedLayer(1) %Regression problem: the FC layer must precede the regression layer at the end. Here at size 1
regressionLayer];
Many thanks for the answer
Saurabh Sharma on 13 Oct 2022
I am trying to train an image-to-image regression neural network, using images of size 100x100x1. I have used the combine function to combine the input images and response images into a combined datastore ds, and then trained with trainNetwork(ds, lgraph, options). It shows the error: output size [127 127 1] does not match with response size [1024 642 3].
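For anyone hitting the same mismatch, a minimal sketch of that combined-datastore setup (the folder names here are hypothetical); in image-to-image regression the final layer's output size must equal the size of each response image, so either the responses or the lgraph need adjusting before training:
dsInput = imageDatastore('inputImages'); %hypothetical folder of 100x100x1 input images
dsResponse = imageDatastore('responseImages'); %hypothetical folder of response images
ds = combine(dsInput,dsResponse); %each read returns an {input, response} pair
analyzeNetwork(lgraph) %check the output size of the final layer, here [127 127 1]
%net = trainNetwork(ds,lgraph,options); %works only once the final layer's output size equals the response size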


More Answers (0)
