
Why does my deep learning code not work well?

wolss on 4 Jul 2019
Commented: wolss on 4 Jul 2019
Hi, I have trained, validated and tested my neural network with nprtool, using trainscg and cross-entropy.
The inputs are all in a single matrix, and so are the targets.
My problem occurs when I give the net more than 11264 columns as input and target (in my case I add 1024 columns every time, step by step), because the confusion matrix and the ROC curve show low performance. In fact, when I give up to 10240 columns as input and target, the net has a precision of 98-99% at most, but when the dimension increases, the precision drops to 91%...
I honestly don't know why... Can you help me?
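
The post does not include code, but a minimal sketch of the setup described (an nprtool-style patternnet trained with trainscg and cross-entropy; the hidden layer size and the variable names dataset_fft and targets are assumptions, not taken from the post) might look like this:

% Minimal sketch, assuming dataset_fft holds the inputs and targets holds
% the target classes, both with one sample per column as nprtool expects.
hiddenLayerSize = 10;                        % assumed; not stated in the post
net = patternnet(hiddenLayerSize, 'trainscg');
net.performFcn = 'crossentropy';

% Default nprtool-style data split.
net.divideParam.trainRatio = 0.70;
net.divideParam.valRatio   = 0.15;
net.divideParam.testRatio  = 0.15;

[net, tr] = train(net, dataset_fft, targets);

% Evaluate and plot the confusion matrix and ROC curve mentioned above.
outputs = net(dataset_fft);
plotconfusion(targets, outputs);
plotroc(targets, outputs);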

Accepted Answer

Aiswarya Subramanian on 4 Jul 2019
Can you explain the structure of the input matrix once again? What does "in my case I add 1024 columns every time, step by step" mean?
Also, I understand that by 'columns' you mean features. If so, it is possible for performance to decrease as the number of input features increases when your model has high variance. If your model is overfit to the training data, it's possible you've used too many features, and reducing the number of inputs will help the model generalize better to test or future datasets. One way to check this is sketched below.
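
A hedged sketch of one way to check for overfitting, assuming the training record tr returned by train() and the network outputs from the sketch above are available (the variable names are assumptions):

% Compare performance on the training and test splits using the training
% record tr returned by train(); a large gap suggests overfitting.
trainTargets = targets(:, tr.trainInd);
trainOutputs = outputs(:, tr.trainInd);
testTargets  = targets(:, tr.testInd);
testOutputs  = outputs(:, tr.testInd);

trainError = confusion(trainTargets, trainOutputs);   % fraction misclassified
testError  = confusion(testTargets,  testOutputs);

fprintf('Train error: %.2f%%   Test error: %.2f%%\n', ...
        100*trainError, 100*testError);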
1 Comment
wolss on 4 Jul 2019
I give as input signals (in the frequency domain) that are 1024x512 matrices (which I then transpose). I combine them into a single matrix called dataset_fft that is 512 x (no. of signals * 1024).
I hope I've explained it clearly... A sketch of this layout is shown below.
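
A minimal sketch of the layout described above, assuming signal_fft is a cell array holding one 1024x512 frequency-domain matrix per signal (the variable name is hypothetical):

% Build dataset_fft by transposing each 1024x512 matrix to 512x1024 and
% placing it into the next block of 1024 columns.
numSignals  = numel(signal_fft);
dataset_fft = zeros(512, numSignals*1024);
for k = 1:numSignals
    dataset_fft(:, (k-1)*1024+1 : k*1024) = signal_fft{k}.';
end
% Each added signal contributes 1024 new columns, matching the step size
% mentioned in the question; the result is 512 x (numSignals*1024).
size(dataset_fft)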


