How to integrate a trained LSTM neural network into a Simulink model?

Hi, I have trained and tested an LSTM NN in MATLAB R2018a, but I'm having trouble finding a way to integrate my trained 'net' into a Simulink model. I tried to create a Simulink block using 'gensim(net)', but it doesn't support LSTM. If anyone has found a way around that, I'd appreciate it if you could share it. Thank you.

3 comments

Hi Carlos,
I have used your code for my own LSTM network, but I'm having some issues implementing it in Simulink. Could you please show the function file you used to call your script? Any assistance would be appreciated.
Hi, I have trained and tested an LSTM NN in MATLAB but do not know how to integrate the trained 'net' with my Simulink model.
Does anybody know how?
You can use the Stateful Predict or Stateful Classify block to run a trained LSTM in Simulink.
Here are some links:


Accepted Answer

David Willingham on 19 October 2021

1 vote

You can use the Stateful Predict or Stateful Classify block to run a trained LSTM in Simulink.
Here are some links:
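As a minimal sketch of that workflow (block and dialog names are from the Deep Learning Toolbox Simulink library, available from R2021a; the MAT-file name is illustrative), assuming `net` is the trained network:

```matlab
% Save the trained LSTM network so a Simulink block can load it.
% The Stateful Predict / Stateful Classify blocks read the network
% from a MAT-file rather than taking it as a workspace parameter.
save('trainedLSTM.mat', 'net');   % file name is an example, not prescribed

% In the Simulink model:
%  1. Add a "Stateful Predict" (numeric output) or "Stateful Classify"
%     (label output) block from the Deep Learning Toolbox library.
%  2. In the block dialog, point the file path at trainedLSTM.mat.
%  3. Feed one feature vector per simulation step; the block carries the
%     LSTM hidden and cell state across steps automatically, so no manual
%     state handling is needed.
```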

More Answers (3)

CARLOS VIDAL on 10 April 2018
Edited: 24 May 2018
The way I found was to write a script (see below) using the LSTM equations and the weights and biases from my previously trained NN, then create a function in Simulink to call the script, with some small adaptations. It works really well!
X=X_Test;
HiddenLayersNum=10;
LSTM_R=net.Layers(2,1).RecurrentWeights;
LSTM_W=net.Layers(2,1).InputWeights;
LSTM_b=net.Layers(2,1).Bias;
FullyConnected_Weights=net.Layers(3,1).Weights;
FullyConnected_Bias=net.Layers(3,1).Bias;
W.Wi=LSTM_W(1:HiddenLayersNum,:);
W.Wf=LSTM_W(HiddenLayersNum+1:2*HiddenLayersNum,:);
W.Wg=LSTM_W(2*HiddenLayersNum+1:3*HiddenLayersNum,:);
W.Wo=LSTM_W(3*HiddenLayersNum+1:4*HiddenLayersNum,:);
R.Ri=LSTM_R(1:HiddenLayersNum,:);
R.Rf=LSTM_R(HiddenLayersNum+1:2*HiddenLayersNum,:);
R.Rg=LSTM_R(2*HiddenLayersNum+1:3*HiddenLayersNum,:);
R.Ro=LSTM_R(3*HiddenLayersNum+1:4*HiddenLayersNum,:);
b.bi=LSTM_b(1:HiddenLayersNum,:);
b.bf=LSTM_b(HiddenLayersNum+1:2*HiddenLayersNum,:);
b.bg=LSTM_b(2*HiddenLayersNum+1:3*HiddenLayersNum,:);
b.bo=LSTM_b(3*HiddenLayersNum+1:4*HiddenLayersNum,:);
%LSTM layer forward pass
h_prev=zeros(HiddenLayersNum,1);%Hidden state initial values (t-1)
c_prev=zeros(HiddenLayersNum,1);%Cell state initial values (t-1)
FC=zeros(size(FullyConnected_Weights,1),size(X,2));%Preallocate softmax outputs
for i=1:size(X,2)%Iterate over time steps (columns of X)
%Input Gate
z=W.Wi*X(:,i)+R.Ri*h_prev+b.bi;
I = 1.0 ./ (1.0 + exp(-z));%Input gate
%Forget Gate
f=W.Wf*X(:,i)+R.Rf*h_prev+b.bf;
F = 1.0 ./ (1.0 + exp(-f));%Forget gate
%Layer Input
g=W.Wg*X(:,i)+R.Rg*h_prev+b.bg;%Layer input
G=tanh(g);
%Output Layer
o=W.Wo*X(:,i)+R.Ro*h_prev+b.bo;
O = 1.0 ./ (1.0 + exp(-o));%Output Gate
%Cell State
c=F.*c_prev+I.*G;%Cell Gate
c_prev=c;
% Output (Hidden) State
h=O.*tanh(c);%Output State
h_prev=h;
% Fully Connected Layers
fc=FullyConnected_Weights*h+FullyConnected_Bias;
FC(:,i)=exp(fc)/sum(exp(fc)); %Softmax
end
[M,II] = max(FC);
YYY= categorical(II,[1 2 3 4 5]);%5 classes
acc = sum(YYY == YY)./numel(YYY) %YY is the *reference* output data set used to calculate the accuracy of the LSTM when facing an unknown input data (X_test).
figure
plot(YYY,'.-')
hold on
plot(YY)
hold off
xlabel("Time Step")
ylabel("Activity")
title("Predicted Activities")
legend(["Predicted" "Test Data"])
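A hedged sketch of the "function on Simulink" step (this is not Carlos's actual file): the forward pass above rewritten as a single-time-step function suitable for a Simulink MATLAB Function block. The structs `W`, `R`, `b` and the matrices `Wfc`, `bfc` are assumed to be the weights extracted in the script above, passed in as parameters; `persistent` variables carry the hidden and cell state between simulation steps.

```matlab
function y = lstmStep(x, W, R, b, Wfc, bfc) %#codegen
% One LSTM time step plus fully connected + softmax, for a MATLAB Function
% block. x is one feature column vector per simulation step.
persistent h c
if isempty(h)
    n = size(W.Wi, 1);    % number of hidden units
    h = zeros(n, 1);      % hidden state h(t-1)
    c = zeros(n, 1);      % cell state c(t-1)
end
I = 1 ./ (1 + exp(-(W.Wi*x + R.Ri*h + b.bi)));  % input gate
F = 1 ./ (1 + exp(-(W.Wf*x + R.Rf*h + b.bf)));  % forget gate
G = tanh(W.Wg*x + R.Rg*h + b.bg);               % candidate cell input
O = 1 ./ (1 + exp(-(W.Wo*x + R.Ro*h + b.bo)));  % output gate
c = F.*c + I.*G;                                 % new cell state
h = O.*tanh(c);                                  % new hidden state
fc = Wfc*h + bfc;
y  = exp(fc) / sum(exp(fc));                     % softmax over classes
end
```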

3 comments

Hello Carlos Vidal,
I am trying to use your code with an LSTM net I have trained, to implement it in Simulink. It is similar to the problem shown in https://au.mathworks.com/help/signal/examples/classify-ecg-signals-using-long-short-term-memory-networks.html .
I have used 100 hidden units in the BiLSTM layer. The trained net gives me one of three categorical values, and I am using 20 specific features for each time step (my XTest data is 11340 rows of 20 features). Currently I am trying to run your code in MATLAB using the independent XTest data from the example.
First, should the X input to this code be a row matrix of my feature values, or does it need to be a cell array (as used when training the net)? I ask because I am getting this error when running the code:
"Error using *
Incorrect dimensions for matrix multiplication. Check that the number of columns in the first matrix matches the number
of rows in the second matrix. To perform elementwise multiplication, use '.*'.
Error in Untitled7 (line 28)
z=W.Wi*X(:,i)+R.Ri*h_prev+b.bi; "
I am also not sure what the variable FC refers to and what its dimensions would be, as I need to predefine it for future code generation.
I am very new at implementing neural nets, and my grasp of the mathematics you are using in the code is very limited. I have implemented a shallow NN, but it is not giving me the results I need, whereas the results of this trained net are much better, so I would really love to be able to implement it in my Simulink model.
Any help I could get would be great, and a merry Xmas to you as well.
The code I am using is:
X=XTest;
HiddenLayersNum=100;
LSTM_R=netLTSM.Layers(2,1).RecurrentWeights;
LSTM_W=netLTSM.Layers(2,1).InputWeights;
LSTM_b=netLTSM.Layers(2,1).Bias;
FullyConnected_Weights=netLTSM.Layers(3,1).Weights;
FullyConnected_Bias=netLTSM.Layers(3,1).Bias;
W.Wi=LSTM_W(1:HiddenLayersNum,:);
W.Wf=LSTM_W(HiddenLayersNum+1:2*HiddenLayersNum,:);
W.Wg=LSTM_W(2*HiddenLayersNum+1:3*HiddenLayersNum,:);
W.Wo=LSTM_W(3*HiddenLayersNum+1:4*HiddenLayersNum,:);
R.Ri=LSTM_R(1:HiddenLayersNum,:);
R.Rf=LSTM_R(HiddenLayersNum+1:2*HiddenLayersNum,:);
R.Rg=LSTM_R(2*HiddenLayersNum+1:3*HiddenLayersNum,:);
R.Ro=LSTM_R(3*HiddenLayersNum+1:4*HiddenLayersNum,:);
b.bi=LSTM_b(1:HiddenLayersNum,:);
b.bf=LSTM_b(HiddenLayersNum+1:2*HiddenLayersNum,:);
b.bg=LSTM_b(2*HiddenLayersNum+1:3*HiddenLayersNum,:);
b.bo=LSTM_b(3*HiddenLayersNum+1:4*HiddenLayersNum,:);
%LSTM - Layer
h_prev=zeros(HiddenLayersNum,1);%Output gate initial values (t-1)
c_prev=zeros(HiddenLayersNum,1);
i=1;
for i=1:length(X)
%Input Gate
z=W.Wi*X(:,i)+R.Ri*h_prev+b.bi;
I = 1.0 ./ (1.0 + exp(-z));%Input gate
%Forget Gate
f=W.Wf*X(:,i)+R.Rf*h_prev+b.bf;
F = 1.0 ./ (1.0 + exp(-f));%Forget gate
%Layer Input
g=W.Wg*X(:,i)+R.Rg*h_prev+b.bg;%Layer input
G=tanh(g);
%Output Layer
o=W.Wo*X(:,i)+R.Ro*h_prev+b.bo;
O = 1.0 ./ (1.0 + exp(-o));%Output Gate
%Cell State
c=F.*c_prev+I.*G;%Cell Gate
c_prev=c;
% Output (Hidden) State
h=O.*tanh(c);%Output State
h_prev=h;
% Fully Connected Layers
fc=FullyConnected_Weights*h+FullyConnected_Bias;
FC(:,i)=exp(fc)/sum(exp(fc)); %Softmax
end
[M,II] = max(FC);
YYY= categorical(II,[1 2 3]);%3 classes
acc = sum(YYY == YTest)./numel(YYY) %YTest is the *reference* output data set used to calculate the accuracy of the LSTM when facing an unknown input data (XTest).
figure
plot(YYY,'.-')
hold on
plot(YTest)
hold off
xlabel("Time Step")
ylabel("Activity")
title("Predicted Activities")
legend(["Predicted" "Test Data"])
Sorry for my big delay! I probably missed the notification.
From what I understood, your matrix dimensions are not matching.
Please check your matrix dimensions again, if you haven't already solved the problem.
Element-wise multiplication, which is used in some of the LSTM gates, does not change the matrix dimensions. The gates are there to limit certain data, e.g. from the previous or current time step.
If you have already solved your problem, please leave your comments here so other people can learn from your experience.
I am meeting the same error; just as Carlos said, it's a matrix-dimension issue. As an LSTM beginner, I think X here should be a matrix of time_steps*features, rather than the testing dataset you used in validation of this model.
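A short sketch of the dimension check behind this error, using the variable names from the script above (the orientation of `XTest` is an assumption). The loop indexes `X(:,i)`, so each column of `X` must be one time step, i.e. `X` is numFeatures-by-numTimeSteps:

```matlab
% If the test data is stored timesteps-by-features (as in the ECG example),
% transpose it so that each column is one time step.
X = XTest.';   % now numFeatures x numTimeSteps

% The input size must match the LSTM input weights extracted earlier:
% W.Wi is HiddenLayersNum x numFeatures, so this must hold.
assert(size(X,1) == size(W.Wi,2), ...
    'Input size must match the InputWeights column count');

% Note: the 4-way gate split in the script assumes a unidirectional
% lstmLayer, whose weights stack 4*numHiddenUnits rows. A bilstmLayer
% stacks 8*numHiddenUnits rows, so the script does not apply to it as
% written.
```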


Mudasar Memon on 22 May 2018

0 votes

What is YY? It is undefined.

7 comments

Hi Mudasar!
Sorry, YY is the data I used as a reference to measure the accuracy of my code; it was not generated by the script. Remember this code is just the forward pass of the LSTM after I trained it using the MATLAB toolbox. I took the weight/bias matrices after training, used them in this script, and used YY just to make sure it runs properly when compared with the toolbox result. Hope it helped you.
Dear Carlos Vidal, thanks for your kind response. Sorry, I am a bit new to LSTM; I was trying out LSTM using https://www.mathworks.com/help/nnet/examples/time-series-forecasting-using-deep-learning.html . What I perceive is that the YY in the code you suggested equals YTest in the above-referred example. Please correct me if I am wrong. (This is my email: mudaserlateef{at}gmail.com, if you want to share something.)
Hi Mudasar, YY is the output of my test/validation data set (X_Test), not used during the training phase of the LSTM. So it is equivalent to the YTest of the MATLAB example you mentioned.
YYY is the output of my trained LSTM (the equivalent of YPred in the MATLAB example you gave) when I use the X_Test data set as input to the trained LSTM. It is then possible to calculate the accuracy of the LSTM script when facing a specific unknown data set (only a feedforward pass).
Hope it's clear!
Mudasar Memon on 24 May 2018
Edited: 24 May 2018
Thanks a lot, CARLOS VIDAL.
It is really useful. Can I get some explanation of this portion of your code, in the context of the example I shared earlier?
[M,II] = max(FC); YYY= categorical(II,[1 2 3 4 5]);%5 features
[M,II] = max(FC); gets the maximum value (and its index) in each column of FC.
YYY= categorical(II,[1 2 3 4 5]); assigns those indices to categories (https://www.mathworks.com/help/matlab/ref/categorical.html).
In my case I used the LSTM to solve a logistic regression (classification) problem, which is why I used categories to be selected.
In your case I guess you are using the LSTM to solve a linear regression, in which case you are unlikely to need this type of classification (category data), or the softmax layer, which provides the probability of each category (I then used [M,II] = max(FC) to select only the most probable category).
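As a toy illustration of that classification step (the numbers below are made up): each column of FC holds the softmax probabilities for one time step, max returns the index of the winning class per column, and categorical maps those indices onto labels.

```matlab
% 3 classes x 2 time steps of softmax probabilities (made-up values)
FC = [0.1 0.7;
      0.2 0.2;
      0.7 0.1];
[M, II] = max(FC);               % M = [0.7 0.7], II = [3 1] (winning class per column)
YYY = categorical(II, [1 2 3]);  % map indices onto the category labels 1..3
```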
Thank you very much, Dear Carlos Vidal.
Why is the dimension of YYY (length(X),1)?


tarkhani rakia on 25 November 2024

0 votes

[This answer repeats CARLOS VIDAL's 2018 script above verbatim.]


Asked: 6 April 2018
Answered: 25 November 2024
