- Since the layer weights are initialized randomly by default (drawn from a uniform distribution), the starting point of the training loss differs on every run, and each initialization can converge to different final weights. One way to freeze the initialization is to assign each layer a fixed random weight matrix of the appropriate size, or simply to fix the seed of the random number generator before training.
- Also, the trainingOptions function has a name-value pair 'Shuffle' that is set to 'once' by default, so the order in which the training mini-batches are processed changes from one training run to the next, which also shifts the result slightly. Set this property to 'never' to get consistent results across training runs.
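Putting both points together, a minimal sketch of a repeatable training run (assuming a recent MATLAB release where rng controls the global generator used for weight initialization) looks like:

```matlab
rng(0, 'twister');                      % fix the random seed before constructing/training the network

options = trainingOptions('sgdm', ...
    'Shuffle', 'never', ...             % keep the mini-batch order identical on every run
    'InitialLearnRate', 0.001, ...
    'MaxEpochs', 40, ...
    'MiniBatchSize', 10);

net = trainNetwork(x_train_reshape, y_train_var1, layers, options);
```

Note that on a GPU some convolution kernels are nondeterministic, so bit-for-bit repeatability may additionally require setting 'ExecutionEnvironment' to 'cpu' in trainingOptions.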
Getting different prediction results each time using trainNetwork
clc;
datafolder='C:\Users\DataSet\';
x_train=xlsread(strcat(datafolder,'train_data.xlsx'),'B2:BFN5878');
x_test=xlsread(strcat(datafolder,'test_data.xlsx'),'B2:BFN5878');
y_train=xlsread(strcat(datafolder,'train_scores.csv'),'B2:F5878');
x_train_reshape = reshape(x_train', [39 39 1 5877]); % 1521 feature columns -> 39x39x1 images
x_test_reshape = reshape(x_test', [39 39 1 5877]);
y_train_var1 = y_train(:, 1); % first score column is the regression target
firstConvLayerFiltNum = 20;
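The reshape above assumes each row of x_train is one flattened 39×39 image; a quick sanity check (a sketch, assuming the xlsread calls returned the full B2:BFN5878 range, which is 5877 rows by 1521 columns) is:

```matlab
% B..BFN spans 1521 columns and rows 2..5878 give 5877 observations,
% so each row should flatten to exactly one 39x39 image.
assert(size(x_train, 2) == 39*39, 'expected 1521 feature columns');
assert(size(x_train, 1) == 5877, 'expected 5877 observations');
```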
layers = [...
imageInputLayer([39 39 1])
convolution2dLayer(5, firstConvLayerFiltNum)
reluLayer
maxPooling2dLayer(2, 'Stride', 2)
fullyConnectedLayer(1)
regressionLayer];
options = trainingOptions('sgdm', 'Plots', 'training-progress', ...
'Momentum', 0.9, ...
'InitialLearnRate', 0.001, ... %0.001
'LearnRateSchedule', 'piecewise', ...
'LearnRateDropFactor', 1, ... %0.1
'LearnRateDropPeriod', 1, ... %8
'L2Regularization', 0.004, ...
'MaxEpochs', 1, ... %40
'MiniBatchSize', 10, ...
'Verbose', true);
net = trainNetwork(x_train_reshape, y_train_var1, layers, options);
YPred = predict(net, x_test_reshape);
xlswrite('C:\Users\DataSet\prediction_files\predict_var1.csv', YPred);
Accepted Answer
Raunak Gupta
7 Aug 2020
Hi,
I assume that by getting different predictions you mean the same input gives different outputs when the network is trained several times.
This can happen because of two things: the random weight initialization and the mini-batch shuffling (see the two points at the top of this page).
Since some randomness is involved in the training steps, it is advised to train the model until the loss does not change for a significant number of epochs, so that training can be considered finished.
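One way to train until the loss stops improving is validation-based early stopping; the sketch below assumes a held-out split x_val/y_val in the same 39×39×1 format (these variables are not in the original post):

```matlab
options = trainingOptions('sgdm', ...
    'MaxEpochs', 100, ...                   % upper bound; training can stop earlier
    'ValidationData', {x_val, y_val}, ...   % held-out data, hypothetical split
    'ValidationFrequency', 50, ...          % iterations between validation passes
    'ValidationPatience', 5, ...            % stop after 5 validations with no improvement
    'MiniBatchSize', 10);
```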