rlStochasticActorPolicy
Policy object to generate stochastic actions for custom training loops and application deployment
Since R2022a
Description
This object implements a stochastic policy, which returns stochastic actions given an input observation, according to a probability distribution. You can create an rlStochasticActorPolicy object from an rlDiscreteCategoricalActor or rlContinuousGaussianActor object, or extract it from an rlPGAgent, rlACAgent, rlPPOAgent, rlTRPOAgent, or rlSACAgent. You can then train the policy object using a custom training loop or deploy it for your application using generatePolicyBlock or generatePolicyFunction. If UseMaxLikelihoodAction is set to 1, the policy is deterministic and therefore does not explore. For more information on policies and value functions, see Create Policies and Value Functions.
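For example, the following minimal sketch extracts the exploration policy from a default PG agent and evaluates it for a random observation. The observation and action specifications, and the use of a default agent, are illustrative assumptions.

% Illustrative observation and action specifications (assumed for this sketch)
obsInfo = rlNumericSpec([4 1]);
actInfo = rlFiniteSetSpec([-1 1]);

% Create a default discrete policy-gradient agent from the specifications
agent = rlPGAgent(obsInfo,actInfo);

% Extract the exploration policy, which is an rlStochasticActorPolicy object
policy = getExplorationPolicy(agent);

% Sample a stochastic action for a random observation
act = getAction(policy,{rand(obsInfo.Dimension)});

% Generate a deployable policy evaluation function from the policy object
generatePolicyFunction(policy);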
Creation
Description
policy = rlStochasticActorPolicy(actor) creates the stochastic policy object policy from the continuous Gaussian or discrete categorical actor actor. It also sets the Actor property of policy to the input argument actor.
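For instance, the following sketch creates a stochastic policy from a discrete categorical actor. The network architecture and the observation and action specifications are illustrative assumptions.

% Illustrative observation and action specifications (assumed for this sketch)
obsInfo = rlNumericSpec([4 1]);
actInfo = rlFiniteSetSpec([-1 0 1]);

% Simple network mapping the observation to one logit per discrete action
net = [
    featureInputLayer(prod(obsInfo.Dimension))
    fullyConnectedLayer(16)
    reluLayer
    fullyConnectedLayer(numel(actInfo.Elements))
    ];
net = dlnetwork(net);

% Create the actor and the stochastic policy object
actor = rlDiscreteCategoricalActor(net,obsInfo,actInfo);
policy = rlStochasticActorPolicy(actor);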
Properties
Object Functions
generatePolicyBlock | Generate Simulink block that evaluates policy of an agent or policy object
generatePolicyFunction | Generate MATLAB function that evaluates policy of an agent or policy object
getAction | Obtain action from agent, actor, or policy object given environment observations
getLearnableParameters | Obtain learnable parameter values from agent, function approximator, or policy object
reset | Reset environment, agent, experience buffer, or policy object
setLearnableParameters | Set learnable parameter values of agent, function approximator, or policy object
Examples
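Create Stochastic Policy from Continuous Gaussian Actor

This example is a sketch. The network architecture, layer names, and observation and action specifications are illustrative assumptions.

% Illustrative observation and action specifications (assumed for this example)
obsInfo = rlNumericSpec([4 1]);
actInfo = rlNumericSpec([2 1]);

% Network with a common path, a mean output path, and a standard deviation output path
commonPath = [
    featureInputLayer(prod(obsInfo.Dimension),Name="obsIn")
    fullyConnectedLayer(16)
    reluLayer(Name="comOut")
    ];
meanPath = fullyConnectedLayer(prod(actInfo.Dimension),Name="meanOut");
stdPath  = [
    fullyConnectedLayer(prod(actInfo.Dimension),Name="stdFC")
    softplusLayer(Name="stdOut")    % keeps standard deviations nonnegative
    ];

net = layerGraph(commonPath);
net = addLayers(net,meanPath);
net = addLayers(net,stdPath);
net = connectLayers(net,"comOut","meanOut");
net = connectLayers(net,"comOut","stdFC");
net = dlnetwork(net);

% Create the actor and the stochastic policy object
actor = rlContinuousGaussianActor(net,obsInfo,actInfo, ...
    ObservationInputNames="obsIn", ...
    ActionMeanOutputNames="meanOut", ...
    ActionStandardDeviationOutputNames="stdOut");
policy = rlStochasticActorPolicy(actor);

% Sample two stochastic actions for the same observation (they generally differ)
obs  = {rand(obsInfo.Dimension)};
act1 = getAction(policy,obs);
act2 = getAction(policy,obs);

% Setting UseMaxLikelihoodAction makes the policy return the maximum
% likelihood (mean) action, so it no longer explores
policy.UseMaxLikelihoodAction = true;
actML = getAction(policy,obs);

% In a custom training loop, you can read and write the policy parameters
params = getLearnableParameters(policy);
% ... update params with your own learning rule ...
policy = setLearnableParameters(policy,params);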
Version History
Introduced in R2022a
See Also
Functions
getGreedyPolicy | getExplorationPolicy | generatePolicyBlock | generatePolicyFunction | getAction | getLearnableParameters | setLearnableParameters
Objects
rlMaxQPolicy | rlEpsilonGreedyPolicy | rlDeterministicActorPolicy | rlAdditiveNoisePolicy | rlHybridStochasticActorPolicy | rlDiscreteCategoricalActor | rlContinuousGaussianActor | rlPGAgent | rlACAgent | rlSACAgent | rlPPOAgent | rlTRPOAgent