Do Multiple Output Neural Networks share the same weights and biases?

Hello, this may be a stupid question, but when training a multiple-output NN in MATLAB, does it share the weights and biases across all outputs, or does it internally create one NN per output?
In other words, is training one NN for each component of the output (like netx and nety below) the same as training a single NN for both components (like netb)?
Context: total_force is a matrix of 20000 samples x 2 directions,
total_emg is a matrix of 8 features x 20000 samples
...
netx = fitnet(i); % net for x (the first component of total_force); i is the hidden-layer size
nety = fitnet(i); % net for y (the second component)
netb = fitnet(i); % net for both x and y (i.e. the whole total_force)
[netx, xtr] = train(netx, total_emg, total_force(:,1)');
% Use the same train/val/test division for the other two nets
nety.divideFcn = 'divideind';
netb.divideFcn = 'divideind';
nety.divideParam.trainInd = xtr.trainInd;
netb.divideParam.trainInd = xtr.trainInd;
nety.divideParam.valInd = xtr.valInd;
netb.divideParam.valInd = xtr.valInd;
nety.divideParam.testInd = xtr.testInd;
netb.divideParam.testInd = xtr.testInd;
[nety, ytr] = train(nety, total_emg, total_force(:,2)');
[netb, btr] = train(netb, total_emg, total_force');
...
Running the code below gives different performances, so I think the single net for both x and y is trained with shared weights and biases (and it is usually the one that performs worst). Can someone tell me if I'm right?
x = netx(total_emg(:, xtr.testInd));
y = nety(total_emg(:, xtr.testInd));
x_y = netb(total_emg(:, xtr.testInd));
% R
t_1 = corrcoef(total_force(xtr.testInd,:), x_y');
t_2 = corrcoef(total_force(xtr.testInd,:), [x', y']);
% MSE
MSE_1net = immse(total_force(xtr.testInd,:), x_y');
MSE_2net = immse(total_force(xtr.testInd,:), [x', y']);
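Note that corrcoef applied to two matrices flattens each one into a single vector, so t_1 and t_2 mix the x and y directions together. A per-direction breakdown could be computed like this (a sketch reusing the variables defined above):
% Per-direction R and MSE for the single two-output net
tgt  = total_force(xtr.testInd, :);        % test targets,     [Ntest x 2]
pred = netb(total_emg(:, xtr.testInd))';   % test predictions, [Ntest x 2]
for k = 1:2
    Rk = corrcoef(tgt(:,k), pred(:,k));    % 2x2 correlation matrix
    fprintf('direction %d: R = %.3f, MSE = %.4g\n', ...
        k, Rk(1,2), mean((tgt(:,k) - pred(:,k)).^2));
end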

Accepted Answer

Greg Heath on 26 Jan 2019
The similarity or orthogonality of the outputs tends to be irrelevant.
If two sets of outputs are not caused by a significant number of shared inputs, then it makes no sense to use the same net.
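For reference, the sharing can be seen directly in the trained net's parameter arrays (a sketch, assuming the netb from the question, with hidden-layer size H, called i in the code above):
% A single-hidden-layer fitnet with 2 outputs shares the input weights
% and hidden biases; only the output layer has one row per output.
size(netb.IW{1,1})   % [H x 8] input-to-hidden weights, shared by both outputs
size(netb.b{1})      % [H x 1] hidden biases, shared
size(netb.LW{2,1})   % [2 x H] hidden-to-output weights, one row per output
size(netb.b{2})      % [2 x 1] one output bias per output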
Thank you for formally accepting my answer
Greg
3 Comments
Greg Heath on 31 Jan 2019
The former. The net does not separate contributions to each output.
In addition, the smaller the hidden layer, the more stable the output with respect to random perturbations in the input.
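A minimal sketch of how that trade-off could be checked, sweeping the hidden-layer size on the same data division as in the question (assumes total_emg, total_force, and xtr from above):
% Sweep the hidden-layer size H and record the test MSE
for H = [2 5 10 20]
    net = fitnet(H);
    net.divideFcn = 'divideind';             % reuse the original division
    net.divideParam.trainInd = xtr.trainInd;
    net.divideParam.valInd   = xtr.valInd;
    net.divideParam.testInd  = xtr.testInd;
    [net, tr] = train(net, total_emg, total_force');
    pred = net(total_emg(:, tr.testInd));    % [2 x Ntest] predictions
    err  = immse(total_force(tr.testInd, :)', pred);
    fprintf('H = %2d: test MSE = %.4g\n', H, err);
end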
Hope this helps.
Greg
