
Difference between using an rlContinuousGaussianActor and an rlContinuousDeterministicActor with a Gaussian exploration model

Hi,
Could someone please explain the difference between using an rlContinuousGaussianActor and using an rlContinuousDeterministicActor with a Gaussian exploration model (namely GaussianActionNoise) in reinforcement learning, e.g. with an rlTD3Agent?
Assuming that combining the rlContinuousGaussianActor with the Gaussian exploration model is not impossible: what is the point/use case of this combination?
Thank you.
1 Comment
Jonas Woeste on 25 Jun 2022
Even though rlContinuousGaussianActor is not applicable to rlTD3Agent, some explanation of the similarities between GaussianActionNoise and rlContinuousGaussianActor would be useful.


Answers (1)

Manas on 8 Sep 2023
Edited: Manas on 8 Sep 2023
Hi Jonas Woeste,
I understand that you wish to know the difference between "rlContinuousGaussianActor" and "rlContinuousDeterministicActor" with a Gaussian exploration model.
The choice between these two approaches depends on the specific problem and the trade-off between exploration and exploitation.
“rlContinuousGaussianActor” is suitable when exploration is crucial and stochastic actions are desired: the actor itself outputs a probability distribution (a Gaussian mean and standard deviation) and samples its actions from it. “rlContinuousDeterministicActor” is suitable when you want a stable, deterministic policy with controlled exploration, striking a balance between exploration and exploitation. To introduce exploration in that case, a Gaussian exploration model such as “GaussianActionNoise” is used; it adds Gaussian noise to the actor's output action, slightly perturbing it and thereby enabling exploration.
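For illustration, here is a minimal sketch of how the two setups are typically constructed with the Reinforcement Learning Toolbox. The observation/action dimensions, layer sizes, and noise standard deviation are placeholder values chosen for this example, not values taken from the question.

% Minimal sketch (placeholder dimensions, layer sizes, and noise settings).
% Requires Reinforcement Learning Toolbox and Deep Learning Toolbox.
obsInfo = rlNumericSpec([4 1]);                              % toy 4-D observation space
actInfo = rlNumericSpec([1 1],LowerLimit=-1,UpperLimit=1);   % toy 1-D bounded action

% --- rlContinuousDeterministicActor + GaussianActionNoise (TD3) ---
% The actor maps an observation to one action; exploration noise is added
% by the agent's exploration model during training, not by the actor itself.
actorNet = dlnetwork([
    featureInputLayer(4)
    fullyConnectedLayer(64)
    reluLayer
    fullyConnectedLayer(1)
    tanhLayer]);
detActor = rlContinuousDeterministicActor(actorNet,obsInfo,actInfo);

% Q-value critic with separate observation and action input paths.
cg = layerGraph([
    featureInputLayer(4,Name="obs")
    fullyConnectedLayer(32,Name="obsFC")]);
cg = addLayers(cg,[
    featureInputLayer(1,Name="act")
    fullyConnectedLayer(32,Name="actFC")]);
cg = addLayers(cg,[
    additionLayer(2,Name="add")
    reluLayer(Name="relu")
    fullyConnectedLayer(1,Name="q")]);
cg = connectLayers(cg,"obsFC","add/in1");
cg = connectLayers(cg,"actFC","add/in2");
critic = rlQValueFunction(dlnetwork(cg),obsInfo,actInfo, ...
    ObservationInputNames="obs",ActionInputNames="act");

% The default TD3 exploration model is GaussianActionNoise; tune it via the options.
td3Opts = rlTD3AgentOptions;
td3Opts.ExplorationModel.StandardDeviation = 0.1;    % placeholder value
agent = rlTD3Agent(detActor,critic,td3Opts);         % one critic for brevity; TD3 normally uses two

% --- rlContinuousGaussianActor (stochastic policy, e.g. for rlSACAgent) ---
% The network outputs a mean and a standard deviation; the actor samples the
% action itself, so no separate exploration model is needed.
gg = layerGraph([
    featureInputLayer(4,Name="obsIn")
    fullyConnectedLayer(64,Name="fc")
    reluLayer(Name="relu")]);
gg = addLayers(gg,fullyConnectedLayer(1,Name="mean"));
gg = addLayers(gg,[
    fullyConnectedLayer(1,Name="stdFC")
    softplusLayer(Name="std")]);                     % keeps the standard deviation positive
gg = connectLayers(gg,"relu","mean");
gg = connectLayers(gg,"relu","stdFC");
gaussActor = rlContinuousGaussianActor(dlnetwork(gg),obsInfo,actInfo, ...
    ObservationInputNames="obsIn", ...
    ActionMeanOutputNames="mean", ...
    ActionStandardDeviationOutputNames="std");

The practical difference shows up in how actions are generated. The deterministic actor always returns the same action for a given observation, and exploration comes only from the noise the agent adds during training, with a standard deviation you set (and optionally decay) through the agent options. The Gaussian actor samples its action from a learned mean and standard deviation, so the amount of exploration is part of the policy itself and is learned rather than scheduled. That is also why rlTD3Agent expects a deterministic actor, while agents such as rlSACAgent or rlPPOAgent use the Gaussian actor.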
In summary, the choice of the combination depends on the nature of the problem and the desired behaviour of the agent you are seeking.
Hope this helps!

Release: R2022a
