Unrecognized method, property, or field 'Min' for class 'nnet.cnn.layer.ImageInputLayer'.

Views: 9 (last 30 days)
clear;clc;close all
% Load the Image Dataset of Normal and Malignant WBC
%imds = imageDatastore('D:\Project\DB1\train','IncludeSubfolders',true,'LabelSource','foldernames');
%img = readimage(imds,1);
%size(img)
%%labelCount = countEachLabel(imds);
%Perform Cross-Validation using Hold-out method with a percentage split of 70% training and 30% testing
%[imdsTrain,imdsValidation] = splitEachLabel(imds,0.7,'randomized');
%%
net = inceptionv3;
inputSizeNet = net.Layers(1).InputSize;
%Convert the network to a dlnetwork object for feature extraction and remove the last four layers, leaving the "mixed10" layer as the last layer.
lgraph = layerGraph(net);
lgraph = removeLayers(lgraph,["avg_pool" "predictions" "predictions_softmax" "ClassificationLayer_predictions"]);
%View the input layer of the network. The Inception-v3 network uses symmetric-rescale normalization with a minimum value of 0 and a maximum value of 255.
lgraph.Layers(1)
%net1 = net;
%%
%Custom training does not support this normalization, so you must disable normalization in the network and perform the normalization in the custom training loop instead. Save the minimum and maximum values as doubles in variables named inputMin and inputMax, respectively, and replace the input layer with an image input layer without normalization.
inputMin = double(lgraph.Layers(1).Min);
inputMax = double(lgraph.Layers(1).Max);
layer = imageInputLayer(inputSizeNet,'Normalization','none','Name','input');
lgraph = replaceLayer(lgraph,'input_1',layer);
%Determine the output size of the network. Use the analyzeNetwork function to see the activation sizes of the last layer. To analyze the network for custom training loop workflows, set the TargetUsage option to 'dlnetwork'.
analyzeNetwork(lgraph,'TargetUsage','dlnetwork')
%Create a variable named outputSizeNet containing the network output size.
outputSizeNet = [8 8 2048];
%Convert the layer graph to a dlnetwork object and view the output layer. The output layer is the "mixed10" layer of the Inception-v3 network.
dlnet = dlnetwork(lgraph);
%Load the Image Dataset of Normal and Malignant WBC
imds = imageDatastore('D:\Project\DB1\train','IncludeSubfolders',true,'LabelSource','foldernames');
labelCount = countEachLabel(imds);
%Partition the data into training and validation sets. Hold out 5% of the observations for testing.
cvp = cvpartition(numel(imds),'HoldOut',0.05);
idxTrain = training(cvp);
idxTest = test(cvp);
annotationsTrain = imds(idxTrain);
annotationsTest = imds(idxTest);
%Create an augmented image datastore containing the images corresponding to the captions. Set the output size to match the input size of the convolutional network. To keep the images synchronized with the captions, specify a table of file names for the datastore by reconstructing the file names using the image ID. To return grayscale images as 3-channel RGB images, set the 'ColorPreprocessing' option to 'gray2rgb'.
tblFilenames = table(cat(1,annotationsTrain.Filename));
augimdsTrain = augmentedImageDatastore(inputSizeNet,tblFilenames,'ColorPreprocessing','gray2rgb')
%%Perform Cross-Validation using Hold-out method with a percentage split of 70% training and 30% testing
%%[imdsTrain,imdsValidation] = splitEachLabel(imds,0.7,'randomized');
%Select the Test images and save in Y_test
Unrecognized method, property, or field 'Min' for class 'nnet.cnn.layer.ImageInputLayer'.
Error in cnnv3 (line 23)
inputMin = double(lgraph.Layers(1).Min);
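A likely root cause, given the R2019a release tag on this page: the `Min` and `Max` properties of `ImageInputLayer` were added in a later release, so accessing them in R2019a fails (the layer display in the comment below shows only `Normalization` and `AverageImage`). A minimal, hedged workaround sketch that guards the property access and falls back to Inception-v3's documented 0-255 input range:

```matlab
% Hedged workaround: in releases where ImageInputLayer has no Min/Max
% properties (e.g. R2019a), fall back to Inception-v3's documented
% input range of [0, 255].
inLayer = lgraph.Layers(1);
if isprop(inLayer,'Min') && isprop(inLayer,'Max')
    inputMin = double(inLayer.Min);
    inputMax = double(inLayer.Max);
else
    inputMin = 0;    % assumed lower bound of the raw image data
    inputMax = 255;  % assumed upper bound (uint8 images)
end
```

With these two scalars in hand, the rest of the script (replacing the input layer and normalizing manually in the training loop) can proceed unchanged.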

Accepted Answer

yanqi liu on 12 Nov 2021
Sir, the calls to lgraph.Layers(1).Min and lgraph.Layers(1).Max are the confusing part; maybe use the following:
clear;clc;close all
% Load the Image Dataset of Normal and Malignant WBC
%imds = imageDatastore('D:\Project\DB1\train','IncludeSubfolders',true,'LabelSource','foldernames');
%img = readimage(imds,1);
%size(img)
%%labelCount = countEachLabel(imds);
%Perform Cross-Validation using Hold-out method with a percentage split of 70% training and 30% testing
%[imdsTrain,imdsValidation] = splitEachLabel(imds,0.7,'randomized');
%%
net = inceptionv3;
inputSizeNet = net.Layers(1).InputSize;
%Convert the network to a dlnetwork object for feature extraction and remove the last four layers, leaving the "mixed10" layer as the last layer.
lgraph = layerGraph(net);
lgraph = removeLayers(lgraph,["avg_pool" "predictions" "predictions_softmax" "ClassificationLayer_predictions"]);
%View the input layer of the network. The Inception-v3 network uses symmetric-rescale normalization with a minimum value of 0 and a maximum value of 255.
lgraph.Layers(1)
%net1 = net;
%%
%Custom training does not support this normalization, so you must disable normalization in the network and perform the normalization in the custom training loop instead. Save the minimum and maximum values as doubles in variables named inputMin and inputMax, respectively, and replace the input layer with an image input layer without normalization.
% inputMin = double(lgraph.Layers(1).Min);
% inputMax = double(lgraph.Layers(1).Max);
layer = imageInputLayer(inputSizeNet,'Normalization','none','Name','input');
lgraph = replaceLayer(lgraph,'input_1',layer);
%Determine the output size of the network. Use the analyzeNetwork function to see the activation sizes of the last layer. To analyze the network for custom training loop workflows, set the TargetUsage option to 'dlnetwork'.
%analyzeNetwork(lgraph,'TargetUsage','dlnetwork')
%Create a variable named outputSizeNet containing the network output size.
outputSizeNet = [8 8 2048];
%Convert the layer graph to a dlnetwork object and view the output layer. The output layer is the "mixed10" layer of the Inception-v3 network.
dlnet = dlnetwork(lgraph);
%Load the Image Dataset of Normal and Malignant WBC
% imds = imageDatastore('D:\Project\DB1\train','IncludeSubfolders',true,'LabelSource','foldernames');
imds = imageDatastore(fullfile(matlabroot,'toolbox','matlab'),...
'IncludeSubfolders',true,'FileExtensions','.tif','LabelSource','foldernames');
labelCount = countEachLabel(imds);
%Partition the data into training and validation sets. Hold out 5% of the observations for testing.
cvp = cvpartition(numel(imds),'HoldOut',0.05);
idxTrain = training(cvp);
idxTest = test(cvp);
annotationsTrain = imds(idxTrain);
annotationsTest = imds(idxTest);
%Create an augmented image datastore containing the images corresponding to the captions. Set the output size to match the input size of the convolutional network. To keep the images synchronized with the captions, specify a table of file names for the datastore by reconstructing the file names using the image ID. To return grayscale images as 3-channel RGB images, set the 'ColorPreprocessing' option to 'gray2rgb'.
tblFilenames = table(cat(1,annotationsTrain.Filename));
augimdsTrain = augmentedImageDatastore(inputSizeNet,tblFilenames,'ColorPreprocessing','gray2rgb')
%%Perform Cross-Validation using Hold-out method with a percentage split of 70% training and 30% testing
%%[imdsTrain,imdsValidation] = splitEachLabel(imds,0.7,'randomized');
%Select the Test images and save in Y_test
1 Comment
sun rise on 13 Nov 2021
ans =
  ImageInputLayer with properties:
                Name: 'input_1'
           InputSize: [299 299 3]
    Hyperparameters
    DataAugmentation: 'none'
       Normalization: 'none'
        AverageImage: []
Error using internal.stats.cvpartitionInMemoryImpl (line 129)
The number of observations must be a positive integer greater than one.
Error in cvpartition (line 175)
cv.Impl = internal.stats.cvpartitionInMemoryImpl(varargin{:});
Error in cnnv3 (line 40)
cvp = cvpartition(numel(imds),'HoldOut',0.05);
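The cvpartition failure reported here is consistent with `numel(imds)` returning 1, because `numel` counts the datastore object itself rather than its files; `imds(idxTrain)` is also not valid indexing for an imageDatastore. A hedged sketch of the partitioning step, assuming `subset` is available for imageDatastore (it was introduced around R2018b):

```matlab
% Count observations via the file list, not the datastore object itself.
numObs = numel(imds.Files);
cvp = cvpartition(numObs,'HoldOut',0.05);
% Select files with subset instead of indexing the datastore directly.
imdsTrain = subset(imds,find(training(cvp)));
imdsTest  = subset(imds,find(test(cvp)));
```

Note that the substituted datastore in the answer (`.tif` files under the MATLAB toolbox folder) may contain very few files, which would also trigger the "must be a positive integer greater than one" error even with the corrected count.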


More Answers (0)

Category: Image Data Workflows

Release: R2019a
