How to train a network on image and feature data with more than one feature input?
In this example: openExample('nnet/TrainNetworkOnImageAndFeatureDataExample')
I want to change numFeatures from 1 to 3. I have added a 3-element vector to X2Train:
>> preview(dsTrain)
ans =
1×3 cell array
{28×28 double} {[-42 0.9891 0.5122]} {[3]}
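For context, here is a minimal sketch of how that setup might look, assuming X1Train, X2Train and TTrain mirror the variable layout of the original example, with X2Train extended to an N-by-3 matrix (the exact variable names are an assumption):
% Sketch only: combine image, feature and label datastores as in the example.
% X1Train: 28x28x1xN images, X2Train: N-by-3 features, TTrain: N-by-1 labels.
dsX1Train = arrayDatastore(X1Train,IterationDimension=4);
dsX2Train = arrayDatastore(X2Train,IterationDimension=1); % each read returns a 1-by-3 row
dsTTrain = arrayDatastore(TTrain);
dsTrain = combine(dsX1Train,dsX2Train,dsTTrain);
preview(dsTrain)
The layer definition then follows the example: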
layers = [
imageInputLayer(imageInputSize,'Normalization','none','Name','images')
convolution2dLayer(filterSize,numFilters,'Name','conv')
reluLayer('Name','relu')
fullyConnectedLayer(50,'Name','fc1')
concatenationLayer(1,3,'Name','concat')
fullyConnectedLayer(numClasses,'Name','fc2')
softmaxLayer('Name','softmax')];
lgraph = layerGraph(layers);
featInput = featureInputLayer(numFeatures,Name="features");
lgraph = addLayers(lgraph,featInput);
lgraph = connectLayers(lgraph,"features","concat/in2");
lgraph = connectLayers(lgraph,"features","concat/in3");
figure
plot(lgraph)
When I run it, I keep getting this error:
Error using trainNetwork
Input datastore returned more than one observation per row for network input 2.
Any help would be appreciated!
0 Comments
Accepted Answer
Ben
20 Jul 2022
The subtle issue here is that the feature data needs to be read out of the datastore as a NumFeatures x 1 column vector, as documented here: https://www.mathworks.com/help/deeplearning/ug/datastores-for-deep-learning.html
So you'll need to transpose your feature data either before it goes into the datastore, or as a transform of your existing datastore (e.g. transformedDsTrain = transform(dsTrain,@(x) [x(1),{x{2}.'},x(3)]);).
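For example, assuming dsTrain is the combined datastore from your question, you could apply and verify the transform like this:
% Transpose the feature cell so each read returns a 3-by-1 column vector.
transformedDsTrain = transform(dsTrain,@(x) [x(1),{x{2}.'},x(3)]);
preview(transformedDsTrain) % the second cell should now be 3x1 rather than 1x3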
However, you'll then run into another subtle issue at the concatenationLayer, since the output of layer 'fc1' will have size 1(S) x 1(S) x 50(C) x BatchSize(B). This needs squeezing so it can be concatenated with the feature data, which has shape 3(C) x BatchSize(B). Probably the easiest way to do that is with a functionLayer. Here's some code to get your network running:
imageInputSize = [28,28,1];
filterSize = 3;
numFilters = 8;
numClasses = 10;
numFeatures = 3;
layers = [
imageInputLayer(imageInputSize,'Normalization','none','Name','images')
convolution2dLayer(filterSize,numFilters,'Name','conv')
reluLayer('Name','relu')
fullyConnectedLayer(50,'Name','fc1')
squeezeLayer()
concatenationLayer(1,3,'Name','cat')
fullyConnectedLayer(numClasses,'Name','fc2')
softmaxLayer('Name','softmax')
classificationLayer];
lgraph = layerGraph(layers);
featInput = featureInputLayer(numFeatures,Name="features");
lgraph = addLayers(lgraph,featInput);
lgraph = connectLayers(lgraph,"features","cat/in2");
lgraph = connectLayers(lgraph,"features","cat/in3");
numObservations = 100;
fakeImages = randn([imageInputSize,numObservations]);
imagesDS = arrayDatastore(fakeImages,IterationDimension=4);
fakeFeatures = randn([numObservations,numFeatures]);
% Transpose so that each read returns a numFeatures-by-1 column vector.
featureDS = arrayDatastore(fakeFeatures.',IterationDimension=2);
fakeTargets = categorical(mod(1:numObservations,numClasses));
targetDS = arrayDatastore(fakeTargets,IterationDimension=2);
ds = combine(imagesDS,featureDS,targetDS);
opts = trainingOptions("adam","MaxEpochs",1);
trainNetwork(ds,lgraph,opts);
function layer = squeezeLayer(args)
arguments
args.Name='';
end
layer = functionLayer(@squeezeLayerFcn,"Name",args.Name,"Formattable",true);
end
function x = squeezeLayerFcn(x)
x = squeeze(x);
% Since squeeze will squeeze out some dimensions, we need to relabel x.
% Assumption: x does not have a 'T' dimension.
n = ndims(x);
newdims = [repelem('S',n-2),'CB'];
x = dlarray(x,newdims);
end
As a final note - I notice you're concatenating the feature input layer to itself, alongside the output of layer 'fc1'. Maybe that's intentional, but it seemed slightly curious to me.
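If that duplication isn't intended, a sketch of the alternative (assuming the rest of the network stays the same) is to declare a 2-input concatenation and connect the features only once:
% In the layers array, use a 2-input concatenation instead:
%   concatenationLayer(1,2,'Name','cat')
% and then connect the feature input to its single free port:
lgraph = connectLayers(lgraph,"features","cat/in2");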
4 Comments
Ben
13 Oct 2023
The error suggests that the issue is with the datastore setup - trainNetwork thinks that your responses/targets have size 1569, but that's actually the batch/observation dimension.
You can find documentation on datastore inputs for trainNetwork here: https://www.mathworks.com/help/deeplearning/ug/datastores-for-deep-learning.html
If you could call:
data = read(All_TrainDs)
and post information about data, we might be able to debug - in particular we want to check the size of each of the inputs/predictors and the output/response in data.
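For instance (assuming All_TrainDs is your combined training datastore and that read returns a cell array with one element per predictor plus the response), something like this would show the sizes:
data = read(All_TrainDs) % one row of predictors and responses
cellfun(@size,data,'UniformOutput',false) % size of each predictor/response cell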
More Answers (0)