How to send values to the workspace during reinforcement learning agent validation for further plotting and analysis. Using the "Run" button in Simulink produces results that differ slightly from validation.

Views: 1 (last 30 days)
I want to export specific values to the workspace during agent validation so I can plot them. I do not want to use the Simulink "Run" button.
Reason: When I use the validation approach, the final value of a parameter is usually slightly different from what I get when I run the Simulink model with the "Run" button. The difference is significant for my analysis. The "To Workspace" block does not output anything when I execute the validation script below; it only works when I use "Run" in Simulink. I want to get values into the workspace during validation.
% Validation options: Tf is the total simulation time, Ts the sample time
simOpts = rlSimulationOptions(MaxSteps=ceil(Tf/Ts),StopOnError="on");
% Simulate the trained agent against the environment
experiences = sim(env,agent,simOpts);

Accepted Answer

Emmanouil Tzorakoleftherakis on 7 Feb 2023
Edited: Emmanouil Tzorakoleftherakis on 7 Feb 2023
Hello,
First, to answer your point about the simulation differences between using the "Play" button vs using the "sim" command from Reinforcement Learning Toolbox:
1) The "sim" command will first run the 'reset' function that you have specified. Using the 'Play' button will not. That means that if you have any randomization in your reset function, for example changing the initial condition of your model, then seeing different results is expected.
2) Not sure which agent you are using, but some agents are stochastic. So even if you run the same simulation multiple times, unless you fix the random seed you will see different results.
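For illustration, here is a minimal sketch of both points. The reset function, the model variable "x0", and the seed value are assumptions for the example, not details of your model:
env.ResetFcn = @localResetFcn;  % sim(env,...) calls this before each episode; the "Play" button does not
rng(0)                          % fix the random seed to rule out stochasticity when comparing runs

% Hypothetical reset function that randomizes an initial condition each episode.
% For Simulink environments it receives and returns a Simulink.SimulationInput object.
function in = localResetFcn(in)
    x0 = -0.5 + rand;                % random initial state in [-0.5, 0.5]
    in = setVariable(in,"x0",x0);    % "x0" is an assumed model variable name
end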
For your second question, on logging data to the workspace when using the "sim" command: I just tested it myself and was able to get the data in my workspace. The main difference is that when you use the "sim" command, the variables from the To Workspace block are saved inside the experiences struct (the output of "sim"), not directly in your base workspace. If that's not the case, I would check whether the model simulates without error.
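For example, assuming your To Workspace block logs a variable named "yout" with the "Timeseries" save format (both names are assumptions for this sketch):
% For Simulink environments, logged signals come back inside the
% SimulationInfo field, which is a Simulink.SimulationOutput object
simInfo = experiences.SimulationInfo;
yout = simInfo.yout;            % variable name set in the To Workspace block
plot(yout.Time,yout.Data)       % plot the logged signal over time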
Hope this helps
2 Comments
Bay Jay on 13 Feb 2023
I am using DDPG. I extracted the data from the experiences struct for a single, fixed initial condition without randomization and compared it with the "Play" option. I observed that the results are the same.
Emmanouil Tzorakoleftherakis on 13 Feb 2023
Edited: Emmanouil Tzorakoleftherakis on 13 Feb 2023
That makes sense, since the DDPG policy is deterministic. Please accept the answer if the issue has been resolved.


More Answers (0)
