Why do I get an error saying that properties cannot be changed when I add an agent to Simulink?

When I open an official example, such as the one that compares PID and DDPG controllers for the water tank level, the original file runs fine. But when I remove the agent from the official example and manually add an "agent" of my own, clicking "Run" in the mlx file produces the following error:
Error using rl.train.SeriesTrainer/run
Error in 'RL_dianyesifu_DDPG_test/RL Agent': Unable to evaluate the mask initialization commands.
Error in rl.train.TrainingManager/train (line 479)
run(trainer);
Error in rl.train.TrainingManager/run (line 233)
train(this);
Error in rl.agent.AbstractAgent/train (line 136)
trainingResult = run(trainMgr,checkpoint);
Caused by:
Error using rl.env.internal.reportSimulinkSimError
The 'Tunable' property of 'rlwatertank/RL Agent/RL Agent' cannot be changed while the simulation is running.
I am a beginner; please give me some advice. Thank you!
2 Comments
Emmanouil Tzorakoleftherakis on 9 January 2024
Did you update the agent variable everywhere? Make sure to update it on the RL Agent block in the Simulink model as well
Mxolisi on 8 October 2024
Hi Emmanouil, I am having the same problem. How do we update these variables?
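A minimal sketch of one way to update them, assuming the water tank model and that the RL Agent block currently references a workspace variable named agent (the block path and variable names here are assumptions, not part of the original example):
loadAgentData = load("myAgent.mat");           % load the new agent from disk (hypothetical file name)
agent = loadAgentData.myAgent;                 % reassign the variable name the RL Agent block references
get_param("rlwatertank/RL Agent","DialogParameters")  % list the block's mask parameters to confirm the name it uses
The key point is that the training script, the "sim" call, and the RL Agent block dialog must all refer to the same, current agent variable.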


Answers (1)

Harsh on 21 March 2025
Hi @cm s,
Assuming you are referring to the "mlx" file that ships with the official example from the following documentation page –
To use your own agent in that example, load it programmatically from a "mat" file and then modify the "sim" call to use that agent. Below are the changes needed to run an agent different from the one given in the example.
In the “Validate Trained Agent” section –
rng(1)                                % fix the random seed so validation runs are reproducible
loadAgentData = load("myAgent.mat");  % load the saved agent from disk into a struct
myAgent = loadAgentData.myAgent;      % extract the agent object from that struct
Then use this “myAgent” with the “sim” function –
experiences = sim(env,myAgent,simOpts);
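The struct returned by "sim" can then be used to sanity-check the loaded agent; a small sketch (this assumes the experience struct exposes the episode reward as a timeseries, as it does for Simulink reinforcement learning environments):
totalReward = sum(experiences.Reward.Data);           % cumulative reward over the validation episode
fprintf("Total episode reward: %.2f\n", totalReward);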
Also modify the "RL Agent" block in the model: double-click it and enter your agent in the "Agent object" field. Note that you first need to load the agent into the base workspace before the "RL Agent" block can use it. [Screenshot: "Block Parameters" dialog of the "RL Agent" block, showing the "Agent object" field.]
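If you prefer to script that step instead of using the dialog, a sketch (the mask parameter name "Agent" is an assumption; verify it with get_param(blk,"DialogParameters")):
load("myAgent.mat","myAgent");                        % the agent must exist in the base workspace
set_param("rlwatertank/RL Agent","Agent","myAgent");  % point the block at that variable (parameter name assumed)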
If you want to understand why you got an error, please share the changes that you made to the "mlx" file before running it.
