How do I find out the number of neurons in layers?
Views: 8 (last 30 days)
Dear all,
I have this code for training a neural network (RBF):
load('trenovaci_modely1_velky')
disp('Trénovací modely byly načteny.')   % "The training models have been loaded."
P = [velky_tvar{1,:}];                   % input vectors
T = [velky_tvar{2,:}];                   % target vectors
net = newrb(P,T,0,0.3)                   % MSE goal 0, spread 0.3
save net                                 % writes the whole workspace (including net) to net.mat
disp('Neuronová síť byla uložena.')      % "The neural network has been saved."
I need to know how many neurons are in the layers. Does anyone have any idea? Thank you for your answers.
0 Comments
Accepted Answer
Greg Heath
1 Apr 2015
% newrb( x, t ,MSEgoal, spread, Nbmax, dNdisp )
% x - I x N matrix of N "I"nput vectors.
% t - O x N matrix of N "O"utput target vectors.
% MSEgoal - Mean squared error goal, default = 0.0.
% spread - Spread of radial basis functions, default = 1.0.
% Nbmax - Maximum number of basis neurons, default is N.
% dNdisp - Number of neurons to add between displays, default = 25.
'BUG: Output listing WILL SKIP line for neurons = 1 '
close all, clear all, clc
[ x, t ] = simplefit_dataset;
[ I N ] = size(x) % [ 1 94 ]
[ O N ] = size(t) % [ 1 94 ]
zx = zscore( x',1 )'; % Standardize to zero-mean/unit-variance
zt = zscore( t',1 )';
figure(1)
plot( zx, zt )
hold on
MSEgoal = 0.01*mean(var(zt',1)) % 0.01
spread = 1
Nbmax = N - 2*round(0.15*N) % 66 ~ 0.7*N
dNdisp = 1
[ net tr ] = newrb( zx, zt, MSEgoal, spread, Nbmax, dNdisp);
% NEWRB, neurons = 0, MSE = 1
% BUG: SKIPS neurons = 1 OUTPUT LISTING
% NEWRB, neurons = 2, MSE = 0.272804
% NEWRB, neurons = 3, MSE = 0.267621
% NEWRB, neurons = 4, MSE = 0.181115
% NEWRB, neurons = 5, MSE = 0.0811973
% NEWRB, neurons = 6, MSE = 0.0246589
% NEWRB, neurons = 7, MSE = 0.0115911
% NEWRB, neurons = 8, MSE = 0.00415462
epochs = tr.epoch; MSE= tr.perf;
result = [ epochs' MSE' ]
% result = epoch MSE
% 0 1
% 1 0.29336
% 2 0.2728
% 3 0.26762
% 4 0.18112
% 5 0.081197
% 6 0.024659
% 7 0.011591
% 8 0.0041546
y = net(zx);
figure(1)
hold on
plot( zx, y, 'r.')
e = zt-y;
NMSE = mse(e) % 0.004154; equals the plain MSE here because zt was standardized to unit variance
Nw = net.numWeightElements % 25
H = (Nw-O)/(I+1+O) % 8
H = size(net.IW{1},1) % rows of the hidden-layer input weight matrix
H = size(net.b{1},1)  % length of the hidden-layer bias vector
H = size(net.LW{2},2) % columns of the output-layer weight matrix
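As a quick check on the network from the question itself, a minimal sketch, assuming the net.mat file written by "save net" above is on the path; the same queries give the hidden- and output-layer sizes:
load net                 % restores the workspace saved by "save net", including the trained net
H = size(net.IW{1},1)    % number of radial basis neurons in the hidden layer
O = size(net.LW{2},1)    % number of neurons in the output layer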
Hope this helps
Thank you for formally accepting my answer.
Greg
0 Comments
More Answers (1)
Vinod Sudheesh
1 Apr 2015
You could do this by querying the "size" property of each of the individual neural network layers. For example, please see the code snippet below:
>> net=feedforwardnet([10 11 12]);
>> net.layers{1}.size
>> net.layers{2}.size
>> net.layers{3}.size
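For the three-hidden-layer feedforwardnet above, those queries return 10, 11, and 12; the size of the output layer (the last entry in net.layers) is set from the target data once the network is configured or trained.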
1 Comment
Greg Heath
1 Apr 2015
Not for NEWRB!
If [ I N ] = size(input) and [ O N ] = size(target),
the initial pre-training topology is I-O and the final post-training topology will be I-H-O.
However, H is unknown until training is complete.
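As a minimal sketch of this point (reusing simplefit_dataset from the accepted answer), the layer-size query from the other answer only gives H after newrb has finished adding neurons:
[ x, t ] = simplefit_dataset;
net = newrb( x, t, 0, 1, 10 );   % cap the network at 10 radial basis neurons
H = net.layers{1}.size           % hidden-layer size, known only now: 10
O = net.layers{2}.size           % output-layer size: 1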