Simulink neural network producing different outputs to workspace

7 views (last 30 days)
william edeg on 10 Jan 2020
Commented: zhu a about 5 hours ago
I have trained a network, and when I test it with plotresponse I get the graph in plotresponse below. But when I create a Simulink block of this network and test it with the same input, I get the graph in the scope.png file below (yellow is the target). I thought it was a problem with normalisation, but now I don't know what could be causing it.
Thanks in advance.

Accepted Answer

william edeg on 12 Feb 2020
If anyone has the same problem and finds this, pay attention to the number of data points you are using for training.
I was using To Workspace blocks with a sample time of 0.001 to collect my training data, but they didn't collect at anything near the proper times or time intervals (intervals of 0.001 s over 200 s should obviously produce about 200000 data points, but I was collecting something like 66667).
I switched to using Scope blocks to collect my data instead, and now I have the correct data, and my gensim network responds identically to the network it was generated from (when using identical inputs).
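A quick way to catch this kind of logging problem is to compare the number of logged samples against what the sample time predicts. A minimal sketch; the timeseries name simout, the 200 s stop time, and the 0.001 s sample time are illustrative assumptions:

Ts   = 0.001;              % intended sample interval
Tend = 200;                % simulation stop time
expected = Tend/Ts + 1;    % samples on a uniform 0:Ts:Tend grid
actual   = numel(simout.Time);
fprintf('expected %d samples, got %d\n', expected, actual)
% If the counts disagree, the log is not uniformly sampled; resample it
% onto the intended grid before using it as training data.
uniform = resample(simout, 0:Ts:Tend);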

More Answers (3)

Nima SALIMI on 25 Jan 2020
I assume that when you are using the Simulink block you are training a new network from scratch. Any time you train a network the results will differ, due to the random initialization of the weights and bias values (and/or a different split of the train and test datasets), even when using the same dataset and hyperparameters. For this reason, a good practice is to train and test the model (whether using Simulink or toolbox functions) a number of times, to reach a more convincing conclusion about the performance of your model.
  3 Comments
Nima SALIMI on 3 Feb 2020
My short answer to your question: nothing is wrong with getting different results when using Simulink one time and not using it another, even with the same network and the same dataset!
My long answer: as I said in my previous answer, it is normal behaviour for any neural network to give different results each time you train and test it, even with the same network and exactly the same dataset. Even if you use only the command line and no Simulink block, training and testing the same model n different times will give n different results (so it is absolutely normal and nothing is wrong!). For further reading on the reason for this behaviour: https://machinelearningmastery.com/reproducible-results-neural-networks-keras/
So what I suggest is:
  1. To get exactly the same results each time you train and test the model, use the rng() function (e.g. rng(2)) for the sake of reproducibility, and you will see that you get the same results :)
  2. But as I said, when you want to choose between several models (let's say 2 models), you should run both models several times (30+, let's say 40 times). This way you will have 40 accuracy values for each model/network. Then take the mean and standard deviation of those 40 values for the two models and pick the better one (an even better way is to apply a statistical significance test to those accuracy values for model selection). A sketch of this procedure follows.
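A minimal sketch of point 2, assuming inputs x and targets t are already in the workspace; the two fitnet architectures and the run count are illustrative:

nRuns  = 40;
hidden = [10, 20];                     % two candidate hidden-layer sizes
perf   = zeros(nRuns, numel(hidden));
for k = 1:nRuns
    for m = 1:numel(hidden)
        net = fitnet(hidden(m));
        [net, tr] = train(net, x, t);
        perf(k, m) = tr.best_tperf;    % test-set performance (MSE)
    end
end
fprintf('model 1: mean %.4g, std %.4g\n', mean(perf(:,1)), std(perf(:,1)))
fprintf('model 2: mean %.4g, std %.4g\n', mean(perf(:,2)), std(perf(:,2)))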
I hope I have answered your question in more detail in this comment.
Thanks for formally accepting my answer.
Best,
Nima
william edeg on 3 Feb 2020
Oh, I see. Thanks for your help.

Sign in to comment.


Greg Heath on 25 Jan 2020
A simpler solution is to ALWAYS begin the program by resetting the random number generator. For example, choose your favorite NONNEGATIVE INTEGER as a seed and begin your program with
rng(seed)
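A minimal sketch of that pattern, assuming a simple fitting problem with inputs x and targets t already in the workspace; the architecture is illustrative:

seed = 0;                      % any nonnegative integer you like
rng(seed)                      % reset the generator before anything random
net = fitnet(10);              % example network
[net, tr] = train(net, x, t);  % weight init and data division now repeatable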
Hope this helps.
Thank you for formally accepting my answer.
Greg
  1 Comment
william edeg on 26 Jan 2020
Edited: william edeg on 26 Jan 2020
Thanks for the response. I think I misunderstood what you meant for a moment. Do you mean the random number generator for the initial network weights? The Simulink network was created using the gensim function, so I think it should be identical to the workspace network. The inputs are also identical, which is why I'm confused about getting different responses.

Sign in to comment.


Nima SALIMI on 25 Jan 2020
From a machine learning perspective, it is better practice to train the model several times and compare the results accordingly (rather than fixing the random seed), since we are interested in making the effect of randomness as negligible as possible. The approach I proposed can also be found in the MATLAB documentation (https://au.mathworks.com/help/deeplearning/gs/classify-patterns-with-a-neural-network.html, second-to-last paragraph).
Anyway, if your time is very limited and you want to check the effect of some variables on model performance (depending on the problem at hand), then you can just fix the seed!
  4 Comments
william edeg on 26 Jan 2020
Thanks again for your response. I think I might not have explained my problem well, sorry. It seems like you and Greg have read my problem as being a different response from different networks, but the Simulink net was made using the gensim function, so it should be identical to the other network I'm comparing it to.
I've successfully trained networks on simpler NARX functions and used gensim to create Simulink networks that respond identically to their workspace versions (roughly the workflow sketched below), but for some reason it's not working for this more complex function.
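For reference, the workflow that works in the simpler cases is roughly this sketch; the delays, hidden size, and the cell-array sequences X and T are illustrative:

net = narxnet(1:2, 1:2, 10);                  % example delays, hidden size
[Xs, Xi, Ai, Ts] = preparets(net, X, {}, T);  % open-loop training data
net = train(net, Xs, Ts, Xi, Ai);
netc = closeloop(net);                        % closed-loop form for simulation
[Xsc, Xic, Aic] = preparets(netc, X, {}, T);
yWorkspace = netc(Xsc, Xic, Aic);             % reference response
gensim(netc, 0.001);                          % generated block, 0.001 s sample time
% Drive the generated block with the identical input; when the logged data
% is sampled correctly, the scope trace matches yWorkspace.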
zhu a about 5 hours ago
Hello, may I ask if this issue was ever resolved? I have encountered a similar problem: the simulation results of an Elman neural network run from a MATLAB script are inconsistent with the results of the gensim-exported Simulink model.

Sign in to comment.
