Problem with CNN architecture for small images of size 6x6
I'm trying to develop a CNN classifier for a large dataset of small images of size 6x6. I have modified the original code of an example (Train a Convolutional Neural Network Using Data in ImageDatastore) from the MATLAB Help. I need to add more layers to make the network deeper and get better results.
Original Code of CNN layers:
layers = [imageInputLayer([28 28 1]);
          convolution2dLayer(5,20);
          reluLayer();
          maxPooling2dLayer(2,'Stride',2);
          fullyConnectedLayer(5);
          softmaxLayer();
          classificationLayer()];
Modified Code of CNN layers:
% Define the convolutional neural network architecture. 
  layers = [imageInputLayer([6 6 1]);
            convolution2dLayer(5,20,'Padding',3);
            reluLayer();
            maxPooling2dLayer(2,'Stride',2);
            convolution2dLayer(5,20);
            reluLayer();
            maxPooling2dLayer(5,'Stride',2);
            fullyConnectedLayer(5);
            softmaxLayer();
            classificationLayer()];
Now I am getting this error:
Error using nnet.cnn.layer.Layer>iInferSize (line 261)
Layer 7 is expected to have a different size.
It expects the pooling layer to have a different size. I have tried 2x2 and 3x3 pooling sizes, but it gives the same error. Please help me fix this issue so that I can add more layers to the CNN.
0 Comments
Answers (1)
Javier Pinzón on 1 Jun 2017
Edited: Javier Pinzón on 1 Jun 2017
      Hello Muhammad,
First of all, you have errors when calculating the output volume of each layer. Let's check:
Convolution 1:
 OutV1 = (6 - 5 + 3*2)/1 + 1 = 8
Maxpooling:
 OutV2 = 8 / 2 = 4
Convolution 2:
 OutV3 = (4 - 5)/1 + 1 = 0
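To make that arithmetic explicit, here is the same check written out as a small MATLAB sketch (the variable names are only for illustration; it just applies the output-size formula given below to your current layers):
 % Output size = (Input - Filter + 2*Padding)/Stride + 1
 out1 = (6 - 5 + 2*3)/1 + 1       % convolution 1, filter 5, 'Padding',3 -> 8
 out2 = floor((out1 - 2)/2) + 1   % 2x2 max pooling, 'Stride',2          -> 4
 out3 = (out2 - 5 + 2*0)/1 + 1    % convolution 2, filter 5, no padding  -> 0 (invalid)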
And then... you don't have any output volume from convolution 2 onwards, so you need to recalculate your filter sizes. I really recommend using filter sizes of 2 or 3 in the convolutions, and also adding padding of (filter size - 1)/2 to each convolution layer to keep a reasonable volume, i.e., if you use a filter of size 3, use 'Padding',1, so you will have:
 Out volume = (In - 3 + 2*1)/1 + 1 = In
Remember:
 Output Volume = [("Input Volume" - "Filter Size" + 2 * "Padding")/"Stride"] + 1
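For illustration, here is one possible way to rewrite the layer array along those lines (a sketch, not the only fix: it uses filter size 3 with 'Padding',1 as suggested above, and keeps the 20 filters and 5-class output from your original code):
 layers = [imageInputLayer([6 6 1]);
           convolution2dLayer(3,20,'Padding',1);   % (6 - 3 + 2*1)/1 + 1 = 6
           reluLayer();
           maxPooling2dLayer(2,'Stride',2);        % 6 / 2 = 3
           convolution2dLayer(3,20,'Padding',1);   % (3 - 3 + 2*1)/1 + 1 = 3
           reluLayer();
           fullyConnectedLayer(5);
           softmaxLayer();
           classificationLayer()];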
Hope it helps, if it is not too late =)
2 Comments
Javier Pinzón on 25 Oct 2017
The +1 is related to the "Bias"; each layer has the activation neuron, and for that reason there is a +1 in the formula.