How to obtain the relative importance of each input variable for a neural network?

Hi. I have built a regression neural network with 580 data points, 48 inputs and 5 outputs. The optimum network has 30 neurons in the first hidden layer and 17 neurons in the second hidden layer, as shown in the figure below. Are there any methods for understanding the relationship between the inputs and outputs of the neural network (whether they have a significant positive or negative relationship)? How can I obtain the relative importance of each input to each output? Thanks.

Accepted Answer

Greg Heath on 16 Aug 2016
The relative importance of an input variable depends on what other input variables are present.
For example, if you are not planning to remove unimportant variables, you can rank the inputs by the performance that results when each variable, one at a time, is replaced by its mean value and the net resumes training from that point.
Hope this helps.
Greg
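Greg's mean-substitution idea can be sketched in a few lines. This is a minimal Python illustration with a made-up stand-in for the trained net (it also skips the "resume training" step Greg mentions and simply re-evaluates the frozen model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained net: a fixed nonlinear function of 4 inputs.
# In practice this would be the trained network's predict function.
def predict(X):
    return np.tanh(2.0 * X[:, 0]) + 0.5 * X[:, 1] - 0.1 * X[:, 2]  # input 3 unused

X = rng.normal(size=(200, 4))
y = predict(X)  # pretend these are the targets the net fits perfectly

def mean_substitution_ranking(predict, X, y):
    """Rank inputs by the MSE increase when each one is replaced by its mean."""
    base_mse = np.mean((predict(X) - y) ** 2)
    scores = []
    for j in range(X.shape[1]):
        Xm = X.copy()
        Xm[:, j] = X[:, j].mean()          # knock out variable j
        mse = np.mean((predict(Xm) - y) ** 2)
        scores.append(mse - base_mse)      # importance = performance loss
    scores = np.array(scores)
    return np.argsort(scores)[::-1], scores

order, scores = mean_substitution_ranking(predict, X, y)
print(order)  # most important input first
```

An input the model ignores gets a score of exactly zero, which is why this ranking is useful as a quick screen even before any retraining.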
3 Comments
Greg Heath on 21 Aug 2016
Yes. However, you would have to start each of the I nets from the same final weights of the all-variable net.
This is quite different from independently designing I different nets with 1 input fixed.
Since the set of best m inputs does not necessarily contain the set of the best m-1 inputs, I find it a waste of time trying to optimize input ranking.
Typically, I am satisfied to just use the variables chosen from a model that is linear in the variables.
However, on rare occasions I have used this method starting with squares and products included as variables.
See
help stepwisefit
doc stepwisefit
Also of interest
help sequentialfs
doc sequentialfs
Hope this is not too confusing.
Greg
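The greedy forward-selection idea behind sequentialfs can be sketched without any toolbox. This is an illustrative Python version using a plain linear least-squares criterion, with invented data; the function name and data are made up for the example:

```python
import numpy as np

def forward_select(X, y, n_keep):
    """Greedy forward selection with a linear least-squares model,
    roughly the idea behind MATLAB's sequentialfs with a linear criterion."""
    chosen, remaining = [], list(range(X.shape[1]))
    for _ in range(n_keep):
        def sse(cols):
            # Fit y on the candidate columns plus an intercept, return SSE
            A = np.column_stack([X[:, cols], np.ones(len(y))])
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            r = y - A @ coef
            return r @ r
        best = min(remaining, key=lambda c: sse(chosen + [c]))
        chosen.append(best)
        remaining.remove(best)
    return chosen

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 6))
y = 3 * X[:, 4] + 0.5 * X[:, 1] + 0.01 * rng.normal(size=100)
sel = forward_select(X, y, 2)
print(sel)  # picks columns 4 and 1 first
```

As Greg notes above, the best m inputs need not contain the best m-1 inputs, so greedy selection like this is a heuristic, not an optimum.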
Emran Alotaibi on 13 Sep 2021
Edited: Emran Alotaibi on 13 Sep 2021
Hey Greg!
I completely agree with you. However, could you point to any scientific paper describing this procedure so it can be cited in our work? Regards


More Answers (3)

Yago Veloso on 18 Jan 2017
Edited: Yago Veloso on 18 Jan 2017
You can use the relative-importance method (Garson's algorithm), given by
R_ij = sum_{k=1}^{H} [ (|W_ik| / sum_m |W_mk|) * |W_kj| ] / sum_i sum_{k=1}^{H} [ (|W_ik| / sum_m |W_mk|) * |W_kj| ],
where R_ij is the relative importance of the variable x_i with respect to the output neuron j, H is the number of neurons in the hidden layer, W_ik is the synaptic connection weight between the input neuron i and the hidden neuron k, and W_kj is the synaptic weight between the hidden neuron k and the output neuron j.
If it's not very clear for you take a look at this paper here: http://www.palisade.com/downloads/pdf/academic/DTSpaper110915.pdf
Good luck! Regards, Yago
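The weight-partitioning equation Yago describes (Garson's algorithm) can be sketched in a few lines. This is an illustrative Python version with random weights, not toolbox code:

```python
import numpy as np

def garson(W1, W2, j=0):
    """Relative importance of each input for output j (Garson's algorithm).

    W1 : (H, I) input-to-hidden weights, W2 : (H, O) hidden-to-output weights.
    Returns importances that sum to 1 across the I inputs.
    """
    # Share of each hidden neuron's incoming signal attributable to input i
    shares = np.abs(W1) / np.abs(W1).sum(axis=1, keepdims=True)   # (H, I)
    # Weight each hidden neuron by its (absolute) connection to output j
    contrib = (shares * np.abs(W2[:, [j]])).sum(axis=0)           # (I,)
    return contrib / contrib.sum()

rng = np.random.default_rng(1)
R = garson(rng.normal(size=(5, 3)), rng.normal(size=(5, 2)), j=0)
print(R)  # three importances summing to 1
```

Note that the absolute values discard the sign of each weight, so this gives magnitude of influence only, not whether the relationship is positive or negative (the point F S raises below).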
1 Comment
F S on 23 Oct 2017
Edited: F S on 23 Oct 2017
Hi Yago, the solution you present above applies to a neural network with only one hidden layer, right? And it doesn't really give the relative importance with respect to the other inputs, does it? How does it deal with negative weights?



tanfeng on 30 May 2019
Same question!

Iman Jafari on 8 Sep 2021
Edited: Iman Jafari on 11 Sep 2021
The relative importance of the input variables on a selected output can be estimated using the equation
E_j = sum_{t=1}^{Nh} [ (|W_jt^i| / sum_{k=1}^{Ni} |W_kt^i|) * |W_tn^o| ] / sum_{k=1}^{Ni} sum_{t=1}^{Nh} [ (|W_kt^i| / sum_{m=1}^{Ni} |W_mt^i|) * |W_tn^o| ]
and the code below. Here E_j is the relative importance of the jth input variable on the nth output, W is a connection weight, and N is the number of neurons; the subscripts 'k', 't' and 'n' refer to input, hidden and output neurons, and the superscripts 'i', 'h' and 'o' refer to the input, hidden and output layers, respectively.
W1 = net.IW{1,1};   % weights between input and hidden layer (H-by-k)
W2 = net.LW{2,1}';  % weights between hidden and output layer (H-by-nOutputs)
k  = size(W1,2);    % number of input variables
g  = 1;             % index of the output to evaluate the relative importance
                    % of the inputs on (leave as 1 if you have only one output)
Q = zeros(k,1);
for n = 1:k
    % share of each hidden neuron's incoming signal attributable to input n,
    % weighted by that neuron's (absolute) connection to output g
    Q(n) = sum((abs(W1(:,n)) ./ sum(abs(W1),2)) .* abs(W2(:,g)));
end
R = 100 * Q / sum(Q);   % relative importance of each input, in percent
6 Comments
moha z on 28 Sep 2023
Hi,
Could you give the source for this equation, please? Thank you.
Jorge on 21 Mar 2024
Hey,
I'm trying to implement this equation. However, when I retrieve the input-layer weights with net.IW{1,1}, MATLAB's fitnet drops two input variables and I cannot identify which ones. I thought it was removing input variables with NaN values, but that is not the case: seven input variables contain some NaN values, yet MATLAB keeps removing only two. Any idea how to find which variables are being removed without testing one at a time?
Thank you.

