Neural network is not reducing loss during training
Hello, I am working on a project that estimates SoC using neural networks. The example runs fine with the provided dataset, and the resulting network does a really great job of predicting SoC: https://www.mathworks.com/help/deeplearning/ug/predict-soc-using-deep-learning.html
However, what I am doing is using external data to train the net and then observing the results after training with this different data.
The problem is that I am not sure whether my data needs to be normalized, because the loss value is not decreasing during training. Because of that, the model is not able to predict the output accurately.
Attached is an example dataset; the first 3 columns are the feature inputs and the last column is the output.
Thanks in advance for your help!
3 Comments
Taylor
13 Dec 2023
Generally, you want to normalize your data based on statistics from the training set only to avoid overfitting/data leakage (as described in the link you sent).
It is also useful to normalize to zero mean and unit variance to bring all features to a similar scale (assuming you want each feature to contribute equally). Ultimately it depends on what you want out of your layer in terms of how often it activates the next layer, but consider the outputs of tanh for values 0-5: the threshold for a tanh activation function is often 0, so every value in the range 0-5 would lead to activation of the next layer.
There isn't necessarily an issue with feeding negative values into your layers. The ReLU activation function will reduce all negative inputs to zero, but other activation functions like tanh and leaky ReLU will preserve some signedness.
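For illustration, a quick throwaway check in base MATLAB (with ReLU written out as max(x, 0) rather than a toolbox layer):

    x = -2:5;        % a few sample pre-activation values
    tanh(x)          % tanh: negative inputs stay negative, positives stay positive
    max(x, 0)        % ReLU: all negative inputs are clipped to zero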
There are plenty of details beyond this that you can get caught up in, but at the end of the day my recommendation is to normalize your data. In the example you're following, the data is normalized to be zero-centered. That's a great place to start; from there you can try other normalization methods if you'd like.
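A minimal sketch of the training-set-only normalization described above, assuming the features are stored row-wise in numeric matrices XTrain and XTest (hypothetical variable names, not from the linked example):

    mu    = mean(XTrain, 1);             % per-feature mean, training split only
    sigma = std(XTrain, 0, 1);           % per-feature standard deviation, training split only
    XTrainNorm = (XTrain - mu) ./ sigma; % zero mean, unit variance
    XTestNorm  = (XTest  - mu) ./ sigma; % reuse the training statistics to avoid leakage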
Answers (1)
Ganesh
13 Dec 2023
I understand that you are unable to train your neural network because the loss does not decrease over the training epochs. The following are possible ways to resolve the issue; a short MATLAB sketch illustrating them follows the list.
- Normalization: Normalize your data before feeding it to the neural network. To answer your question, the best normalization here would be to keep the values bounded between 0 and 1, and to normalize the entire dataset, both features and labels. In your case the normalization can happen after splitting off the test set, because in a real-world scenario SoC decreases and you will not be able to obtain the global minimum value while testing the network. Please refer to the documentation on how to normalize data in MATLAB: https://in.mathworks.com/help/matlab/ref/double.normalize.html
- Learning rate: Sometimes a very high or a very low learning rate can prevent the model from learning correctly. I suggest modifying the learning rate and then retraining the model to see its effect.
- Activation function: Ensure that you have the right activation functions in place; for example, do not use a softmax layer as the last layer, as it is meant for classification problems only. Experiment with activation functions so that incompatible ones are not used. Please refer to the documentation on the deep learning layers available in MATLAB: https://in.mathworks.com/help/deeplearning/ug/list-of-deep-learning-layers.html
- Overfitting as a check: Try to overfit the data. This is not suggested in practice, but it can help you debug the issue, as it tells you whether the problem lies in the data or in the code.
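A combined sketch of the points above, assuming the attached data has been loaded into a numeric matrix named data with the three feature columns first and the SoC label in the fourth column (the variable names, layer sizes, and option values are placeholder choices, not taken from the linked example):

    X = data(:, 1:3);                    % feature inputs
    Y = data(:, 4);                      % SoC label

    % 1) Normalization: rescale features and label to the range [0, 1]
    X = normalize(X, "range");
    Y = normalize(Y, "range");

    % 3) Activation/output layers: a regression problem, so the network ends with a
    %    single-output fully connected layer and a regressionLayer (no softmax)
    layers = [
        featureInputLayer(3)
        fullyConnectedLayer(64)
        reluLayer
        fullyConnectedLayer(32)
        reluLayer
        fullyConnectedLayer(1)
        regressionLayer];

    % 2) Learning rate: set it explicitly and adjust it if the loss plateaus or diverges
    options = trainingOptions("adam", ...
        "InitialLearnRate", 1e-3, ...
        "MaxEpochs", 100, ...
        "MiniBatchSize", 128, ...
        "Shuffle", "every-epoch", ...
        "Plots", "training-progress");

    % 4) Debugging: first try to overfit a small subset; if the loss still does not
    %    drop, the problem is likely in the data or the network definition
    idx = 1:min(500, size(X, 1));
    netDebug = trainNetwork(X(idx, :), Y(idx), layers, options);

    % Full training run once the small-subset check behaves as expected
    net = trainNetwork(X, Y, layers, options);

If the loss curve in the training-progress plot still stays flat after normalization, sweeping InitialLearnRate over, say, 1e-1 down to 1e-4 is usually the next thing to check.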
Hope this helps!
0 Comments