Using the PyTorch Model Predict block in MATLAB R2024a / Simulink to import a neural network trained in PyTorch

Hi everyone, I am now using the PyTorch Model Predict block in MATLAB R2024a / Simulink to import a neural network trained in PyTorch into Simulink for robot dynamics control. My current environment configuration: MATLAB R2024a; a conda Python environment with interpreter version 3.9; the CPU version of PyTorch; and the neural network has also been saved as a CPU model.
Checking Python at the MATLAB command line:
>> pyenv
ans =
  PythonEnvironment with properties:
          Version: "3.9"
       Executable: "C:\Users\64375\.conda\envs\conda_env_39\python.exe"
          Library: "C:\Users\64375\.conda\envs\conda_env_39\python39.dll"
             Home: "C:\Users\64375\.conda\envs\conda_env_39"
           Status: Loaded
    ExecutionMode: InProcess
        ProcessID: "21020"
      ProcessName: "MATLAB"
I import the neural network file policy_net_matlab.pth in the Simulink block following the official tutorial, using the torch.load() load command. The input and output dimensions are set to match the network parameters. No pre-processing or post-processing functions are loaded, because they are not used. Clicking Run reports the error shown in the image:
2 Comments
Malay Agarwal on 19 Aug 2024
Hi @Daisy,
Could you please share the PyTorch model saved as a ".pth" file and any other Python files you might be using here?
Daisy on 19 Aug 2024
Thanks, here is the network model file I used: policy_net_matlab.pth. I designed the network structure as shown in the code below:
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F

class Policy(nn.Module):
    def __init__(self, state_dim, action_dim, min_log_sigma=-20, max_log_sigma=2):
        super(Policy, self).__init__()
        self.layer1 = nn.Linear(state_dim, 256)
        self.layer2 = nn.Linear(256, 256)
        self.layer3 = nn.Linear(256, 256)
        self.mu = nn.Linear(256, action_dim)
        self.log_sigma = nn.Linear(256, action_dim)
        self.min_log_sigma = min_log_sigma
        self.max_log_sigma = max_log_sigma

    def forward(self, x):
        x = F.leaky_relu(self.layer1(x), 0.1)
        x = F.leaky_relu(self.layer2(x), 0.1)
        x = F.leaky_relu(self.layer3(x), 0.1)
        mu = self.mu(x)
        log_sigma = self.log_sigma(x)
        log_sigma = torch.clamp(log_sigma, self.min_log_sigma, self.max_log_sigma)
        return mu, log_sigma
In fact, when using the network for prediction, only the mu value needs to be computed.
Also, I didn't use any other Python files. Do I need to load any other Python files, and if so, where should I load them? Thanks!


Answers (3)

Malay Agarwal on 20 Aug 2024 (edited: 20 Aug 2024)
Hi @Daisy,
According to the documentation for the "PyTorch Model Predict Block", when you specify the "Load command" block parameter as "torch.load()" or "load_state_dict()", you also need to define the PyTorch model class in a ".py" file before you save the model using "torch.save()": https://www.mathworks.com/help/releases/R2024a/deeplearning/ref/pytorchmodelpredict.html#mw_66980306-4b5e-4d9d-a546-01414e3e91ae:~:text=If%20you%20select%20torch.load()%20or%20load_state_dict()%2C%20you%20must%20define%20the%20PyTorch%20model%20class%20in%20a%20.py%20file%20before%20you%20save%20the%20model%20using%20torch.save().
In other words, you need to have the file that defines your model class in the same directory as the ".pth" file. For example, here is my directory structure:
And this is what my "ptmodel.py" looks like:
import torch
import torch.nn as nn
import torch.nn.functional as F

class PolicyNN(nn.Module):
    def __init__(self, state_dim, action_dim, min_log_sigma=-20, max_log_sigma=2):
        super().__init__()
        self.layer1 = nn.Linear(state_dim, 256)
        self.layer2 = nn.Linear(256, 256)
        self.layer3 = nn.Linear(256, 256)
        self.mu = nn.Linear(256, action_dim)
        self.log_sigma = nn.Linear(256, action_dim)
        self.min_log_sigma = min_log_sigma
        self.max_log_sigma = max_log_sigma

    def forward(self, x):
        x = F.leaky_relu(self.layer1(x), 0.1)
        x = F.leaky_relu(self.layer2(x), 0.1)
        x = F.leaky_relu(self.layer3(x), 0.1)
        mu = self.mu(x)
        log_sigma = self.log_sigma(x)
        log_sigma = torch.clamp(log_sigma, self.min_log_sigma, self.max_log_sigma)
        return mu, log_sigma

model = PolicyNN(20, 256)
# Insert your training code
torch.save(model, "ptmodel.pth")
Additionally, your "forward" function outputs two values, so the block needs to have two outputs, as shown in the image below. You can increase the number of outputs for the block using the "Outputs" tab in the block parameters dialog.
I am attaching a complete working example as a ZIP file to the answer.
Some other information that might be useful:
  • Python version is 3.10.4.
  • torch version is 2.0.1.
Hope this helps!
2 Comments
Daisy on 20 Aug 2024
Thank you, the file you provided works on my computer, but after replacing it with my .pth file (ptmodel1.pth) it still shows the same error as above. The test.py file in the attachment shows how I saved the file (ptmodel1.pth). I then put the saved file in the MATLAB folder, and the program still doesn't run. Could you record a short video of the procedure? Thanks!
Malay Agarwal on 21 Aug 2024 (edited: 21 Aug 2024)
Upon further research, it seems to me that even the directory structure while loading the model needs to match the directory structure while saving the model. This is due to the peculiarities of Python's pickle module and is also mentioned in PyTorch's documentation: https://pytorch.org/tutorials/beginner/saving_loading_models.html#:~:text=The%20disadvantage%20of%20this.
My example works since the directory structure while saving the model is the same as the directory structure while loading the model in Simulink. If you have, for example, moved your training file to some other directory after saving the model and then tried loading it, this is unlikely to work. You also need to make sure nothing changes about the directory structure between each run of the simulation.
For a more robust and easier solution, I suggest using the "state_dict" (https://pytorch.org/tutorials/beginner/saving_loading_models.html#what-is-a-state-dict) method to save and load the model. I have attached a working example which uses the "state_dict" method to the answer.
In the example, I have the following directory structure:
The file "policy.py" has the model class "PolicyNN" defined in it and this is what "train.py" looks like:
import torch
from policy import PolicyNN
model = PolicyNN(20, 256)
# Insert training code
torch.save(model.state_dict(), "ptmodel.pth") # Note how model.state_dict() is saved
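For reference, loading the saved "state_dict" back into the model in plain Python follows the standard PyTorch pattern sketched below. This is only a quick way to verify the saved file outside Simulink, with the dimensions (20, 256) taken from the example above; it is not part of the attached example.
import torch
from policy import PolicyNN  # policy.py defines the model class, as described above

# Rebuild the architecture first, then load the saved parameters into it
model = PolicyNN(20, 256)
model.load_state_dict(torch.load("ptmodel.pth"))
model.eval()

# Quick check with a dummy 20-dimensional state vector
mu, log_sigma = model(torch.randn(1, 20))
print(mu.shape, log_sigma.shape)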
The model looks as follows:
A few things to note here:
Finally, you need to make sure the module "policy" is importable by Python. To do this, you need to add the working directory to Python's "sys.path" variable. Thus, I have defined an "InitFcn" callback (Right Click > Model Properties > Callbacks), shown below, which simply inserts the path of the working directory into "sys.path":
insert(py.sys.path, int32(0), "path\to\working\directory\")
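As a quick check (outside Simulink, and only a sketch), you can confirm that this path setup makes the "policy" module importable; the directory placeholder below is the same one used in the InitFcn above and should be replaced with your actual path.
import sys

# Same directory that the InitFcn inserts into sys.path
sys.path.insert(0, r"path\to\working\directory")

# If this import succeeds, the module can be resolved from that directory
from policy import PolicyNN
print(PolicyNN)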
Hope this helps!



Daisy on 21 Aug 2024
Yes, thank you very much for your solution. Now I think my neural network is ready to run in Simulink: when I feed a constant vector to the neural network prediction block (in my_sim I used xxx defined in the workspace as the input), the prediction block works fine. But when I want to use the feedback signal as the state input of the neural network inside Simulink for the next prediction, the program reports a new error, even though I added a Memory block to eliminate the algebraic loop. I think we are very close to the right answer, but the new problem is just as frustrating. The attached file contains my Simulink model; please run the parameter.m file before running the Simulink model. Could you please continue to look into the new problem? I would appreciate it, thank you!
2 Comments
Malay Agarwal on 22 Aug 2024 (edited: 22 Aug 2024)
Hi @Daisy,
In the two "Constant" blocks being used, you need to uncheck "Interpret vector parameters as 1-D" in the block parameters dialog so that the blocks correctly output multidimensional vectors and feed them to the model. For example:
Please do this for both Constant blocks. You should be able to simulate the model after this.
Hope this helps!
Daisy on 22 Aug 2024
Yes, my simulation model now runs successfully. Thank you very much for your answer. I hope our dialogue can be seen by more people, so that it can provide a technical reference for other researchers working in this direction.



Don Mathis on 28 Aug 2024
Hello @Daisy,
I think the original error message referring to the '__main__' module occurred because, before saving your model, your PyTorch model class was defined in the '__main__' module instead of in a module with a filename. This happens if you execute your class definition code directly in a Python script, or as top-level code in a Jupyter notebook. When the model class is defined in '__main__', the Simulink block can't find the definition at runtime.
The solution is to import the module file that defines your PyTorch model class before creating and saving your model. This defines the class in a module whose name is that filename.
For example, if you put your original code in a file named 'ptmodel.py', you could create and save your model like this:
from ptmodel import Policy
import torch
model = Policy(20, 256)
# Insert your training code
torch.save(model, "ptmodel.pth")
In my experience, it is not necessary for the directory structure to match between model creation time and model usage time. The ptmodel.py file must be on the Python path at usage time.
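As a small addition (my own sketch, not part of the answer above), you can sanity-check the recorded module name just before saving, since pickle stores the class's defining module inside the ".pth" file:
import torch
from ptmodel import Policy  # import the class from its module file, as suggested above

model = Policy(20, 256)

# pickle records the class's defining module; '__main__' here would mean the
# Simulink block cannot resolve the class when it calls torch.load()
print(type(model).__module__)  # expected: 'ptmodel'

torch.save(model, "ptmodel.pth")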
