Why does a neural network give negative outputs?

2 views (last 30 days)
Harsha M V on 31 Mar 2019
Commented: Greg Heath on 4 Apr 2019
I have a dataset of 15,000 samples with 6 inputs and 12 outputs. Using feedforwardnet, I get training, validation, test, and overall regression above 95%.
But when I run the trained net on new inputs, I get negative values in the outputs, even though there are no negative values in the dataset.
What is the reason for this?
What could be wrong?
What should I do to overcome this issue?

Accepted Answer

Greg Heath on 1 Apr 2019
How different is the new data (e.g., Mahalanobis distance)?
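One way to run this check in MATLAB is a sketch like the following (it assumes the Statistics and Machine Learning Toolbox `mahal` function; `Xtrain` and `Xnew` are placeholder names for your training and new input matrices, observations in rows):

```matlab
% Xtrain: N-by-6 training inputs, Xnew: M-by-6 new inputs (placeholders).
% mahal returns the SQUARED Mahalanobis distance of each row of Xnew
% from the sample distribution defined by Xtrain.
d2 = mahal(Xnew, Xtrain);
d  = sqrt(d2);                    % distances in "standard deviation" units
fprintf('max Mahalanobis distance = %.2f\n', max(d));
% Rule of thumb: distances well above ~3 suggest the new inputs lie
% outside the region covered by the training data (extrapolation).
```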
If you know the true outputs, how do the error rates compare?
If you want strictly non-negative outputs, use a sigmoid (e.g., logsig) transfer function in the output layer.
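A minimal sketch of that change with the shallow-network API (assuming a one-hidden-layer feedforwardnet; `x` and `t` are placeholders for your 6-by-N inputs and 12-by-N targets, and the processParams indexing assumes the default output processing functions):

```matlab
net = feedforwardnet(10);              % one hidden layer, 10 neurons
net.layers{2}.transferFcn = 'logsig';  % output layer sigmoid -> range (0,1)

% logsig can only produce values in (0,1), but the default mapminmax
% output processing maps targets to [-1,1]; shift its lower bound so the
% processed targets match the logsig range (ymax defaults to 1):
net.outputs{2}.processParams{2}.ymin = 0;

[net, tr] = train(net, x, t);
y = net(xnew);                         % outputs are now non-negative
```

With this setup, the network's raw outputs are bounded in (0,1) and the output processing maps them back to the original target range, so predictions cannot go below the minimum of the training targets.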
Hope this helps.
*Thank you for formally accepting my answer*
Greg
4 Comments

Harsha M V on 4 Apr 2019
Yes, the Mahalanobis distance is 6.5.
Greg Heath on 4 Apr 2019
It is not uncommon for new data to lie outside the bounds of training data.
Consider whether negative output values are meaningful for your problem.
If not, use sigmoids in the output layer.
Greg


More Answers (0)

