Hi. I am new to DNNs. I'm using a deep neural network for binary classification, but it returns all zeros or all ones.

I've tried machine learning approaches (SVM, KNN, decision trees, ...) and the accuracy is good.
I am interested in transfer learning, so I want to build a deep learning model.
The attached picture shows my training data: columns 1 to 9 are the features, and the marked column (10) is the response, which will be converted into a categorical vector for training.
Here is my code for network training.
My training data is attached ༼ •̀ ں •́ ༽ Thanks.
layers = [
    sequenceInputLayer(9,"Name","sequence")
    fullyConnectedLayer(12,"Name","fc_1")
    reluLayer("Name","relu_1")
    fullyConnectedLayer(96,"Name","fc_3")
    reluLayer("Name","relu_2")
    fullyConnectedLayer(48,"Name","fc_4")
    reluLayer("Name","relu_3")
    fullyConnectedLayer(2,"Name","fc_2")
    softmaxLayer("Name","softmax")
    classificationLayer("Name","classoutput")];
options = trainingOptions('adam', ...
    'InitialLearnRate',0.01, ...
    'LearnRateSchedule','piecewise', ...
    'MaxEpochs',30, ...
    'ValidationData',{xtest,ytest}, ...
    'ValidationFrequency',3, ...
    'MiniBatchSize',1024, ...
    'Verbose',1, ...
    'Plots','training-progress');
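The training call itself is not shown above; a minimal sketch, assuming the training arrays are named xtrain and ytrain to mirror the xtest/ytest used for validation:

% Assumed variable names: xtrain/ytrain (not shown in the original snippet).
net = trainNetwork(xtrain, ytrain, layers, options);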
Tommy Bear on 21 Feb 2020
Yeah, the accuracy is bad since the responses are all the same (all ones) for arbitrary predictors.


Accepted Answer

Daniel Vieira on 21 Feb 2020
I recommend normalizing your predictors; they range from 10^-5 to 10^9, which is a pretty extreme spread. I'd rather work with the log10 of those values (ranging from -5 to 9).
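A minimal sketch of that transformation, assuming the predictors sit in a numeric matrix named X with one feature per column (the name X is an assumption):

% Assumed: X holds the 9 predictor columns from the attached data.
% log10 compresses the 10^-5 .. 10^9 range into roughly -5 .. 9.
Xlog = log10(X);
% Optionally standardize each feature to zero mean and unit variance:
Xnorm = (Xlog - mean(Xlog,1)) ./ std(Xlog,0,1);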
I would also change the sequence input layer to an imageInputLayer with size [1 9], and possibly adapt the way you supply the input data. The sequence input layer is meant for LSTM models, which is not your case. The imageInputLayer, although meant for images, works just fine with vectors and matrices of any sort.
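A rough sketch of that change (the reshape assumes the observations are the rows of the standardized matrix Xnorm from the sketch above, which is an assumption about how the data is stored):

% Replace the sequence input layer with an image input layer of size [1 9]:
layers(1) = imageInputLayer([1 9 1],"Name","input");
% trainNetwork then expects a 4-D array: 1-by-9-by-1-by-numObservations.
Xtrain4d = reshape(Xnorm', 1, 9, 1, size(Xnorm,1));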
Tommy Bear on 25 Feb 2020
Oh, it seems my validation data had the wrong dimensions, and I fixed it.
But the accuracy won't improve beyond around 80%; any suggestions for improving the results?
Daniel Vieira on 26 Feb 2020
It may be because your data is unbalanced, as Srivardhan Gadila said below. When the data is too unbalanced, the network will nearly always answer in favor of the most frequent label, effectively "capping" accuracy and getting all of the less frequent labels wrong. You should select equal (or nearly equal) amounts of observations for each label. That should improve your accuracy.
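One way to do that is to undersample the majority class; a minimal sketch, assuming the labels are in a categorical vector Y and the observations are the rows of X (both names are assumptions):

% Indices of each class (labels "0" and "1" assumed from the data description).
idx0 = find(Y == "0");
idx1 = find(Y == "1");
n = min(numel(idx0), numel(idx1));      % size of the smaller class
keep = [idx0(randperm(numel(idx0),n)); idx1(randperm(numel(idx1),n))];
keep = keep(randperm(numel(keep)));     % shuffle the balanced subset
Xbal = X(keep,:);
Ybal = Y(keep);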


More Answers (1)

Srivardhan Gadila on 25 Feb 2020
It seems that your dataset is unbalanced: there are 59,695 sequences with label 0 and 94,226 with label 1. This could bias the network's learning toward label 1. Please refer to Prepare and Preprocess Data & Deep Learning Tips and Tricks for more information.
For normalizing the data, you can make use of the 'Normalization' and 'NormalizationDimension' name-value pair arguments of sequenceInputLayer or imageInputLayer.
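For example, a minimal sketch using the [1 9] image-input layout suggested above ('zscore' is one of the supported normalization options; 'element' normalizes each of the 9 features independently):

% Built-in normalization handled by the input layer itself (assumed [1 9 1] layout).
inputLayer = imageInputLayer([1 9 1],"Name","input", ...
    "Normalization","zscore", ...
    "NormalizationDimension","element");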
