Constraint Enforcement for Control Design

Some control applications require the controller to select control actions such that the plant states do not violate critical constraints. In many cases, these constraints apply to plant states that the controller does not control directly. Instead, you specify a constraint function that expresses the constraint in terms of the control action signal. This constraint function can be a known relationship or one that you must learn from experimental data.

Constraint Enforcement Block

The Constraint Enforcement block, which requires Optimization Toolbox™ software, computes the modified control action that is closest to a specified control action, subject to constraints and action bounds. The block uses a quadratic programming (QP) solver to find, in real time, the control action u that minimizes the function |u - u_0|^2. Here, u_0 is the unmodified control action from the controller.

The solver applies the following constraints to the optimization problem.

f_x + g_x·u ≤ c
u_min ≤ u ≤ u_max

Here:

  • f_x and g_x are coefficients of the constraint function, which depend on the plant states x.

  • c is a bound for the constraint function.

  • u_min is the lower bound for the control action.

  • u_max is the upper bound for the control action.
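
The block solves this QP internally during simulation. As a rough offline illustration, the following MATLAB sketch sets up the same optimization for a scalar control action using quadprog from Optimization Toolbox. The numerical values for u_0, f_x, g_x, c, and the action bounds are hypothetical placeholders.

    % Minimal sketch of the QP solved for a scalar control action.
    % All numerical values are hypothetical placeholders.
    u0   = 1.5;    % unmodified control action from the controller
    fx   = 0.8;    % constraint function coefficient f_x
    gx   = 2.0;    % constraint function coefficient g_x
    c    = 2.0;    % constraint bound
    umin = -1;     % lower action bound
    umax =  1;     % upper action bound

    % Minimize |u - u0|^2 = u'*u - 2*u0'*u + constant
    H = 2;         % quadratic term
    f = -2*u0;     % linear term
    A = gx;        % inequality f_x + g_x*u <= c  =>  g_x*u <= c - f_x
    b = c - fx;

    uStar = quadprog(H,f,A,b,[],[],umin,umax);

In this sketch, the unmodified action u_0 = 1.5 violates the constraint, so the solver returns the closest feasible action, u = 0.6.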

Since the Constraint Enforcement block modifies the original control action, the final closed-loop system might not achieve the design objectives of the original controller, such as stability margins.

You must verify that the combined controller and Constraint Enforcement block meet your original control objectives. If the system does not meet those objectives, consider updating your original controller design. For example, you can design the controller with additional gain and phase margin to compensate for any potential performance degradation.

Constraint Function Coefficients

Depending on your application, the coefficients f_x and g_x of the constraint function can be linear or nonlinear functions of the plant states and can be either known or unknown.

For an example that uses known nonlinear constraint function coefficients, see Enforce Constraints for PID Controllers. This example derives the constraint function from the plant dynamics.

When you cannot derive the constraint function from the plant directly, you must learn the coefficients from input/output data collected in experiments or simulations. To do so, you can create a function approximator and tune it to reproduce the observed input-to-output mapping.

To learn linear coefficient functions, you can find a least-squares solution from the data. For examples that use this approach, see Train RL Agent for Adaptive Cruise Control with Constraint Enforcement and Train RL Agent for Lane Keeping Assist with Constraint Enforcement.
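
As a rough sketch of the least-squares approach, the following MATLAB code fits linear coefficient functions from logged data, assuming the constrained output depends linearly on the plant states and the control action. The data and parameter values are synthetic placeholders, not taken from the examples above.

    % Hedged sketch: fit linear constraint-function coefficients from
    % logged data, assuming the constrained output xc is approximately
    % x*thetaF + u*thetaG.
    % X, U, and Xc stand in for logged simulation or experimental data.
    numSamples = 200;
    X  = randn(numSamples,2);                        % plant state samples (rows)
    U  = randn(numSamples,1);                        % applied control actions
    thetaTrue = [1.0; -0.5; 0.3];                    % synthetic "true" parameters
    Xc = [X U]*thetaTrue + 0.01*randn(numSamples,1); % constrained output samples

    % Least-squares fit using the backslash operator
    thetaHat = [X U] \ Xc;
    fxHat = X*thetaHat(1:2);    % learned f_x evaluated at each state sample
    gxHat = thetaHat(3);        % learned coefficient g_x on the control action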

For nonlinear coefficient functions, you must tune a nonlinear function approximator. Examples of such approximators include:

  • Deep neural networks (requires Deep Learning Toolbox™ software)

  • Nonlinear identified system models (requires System Identification Toolbox™ software)

  • Fuzzy inference systems (requires Fuzzy Logic Toolbox™ software)

For examples that learn nonlinear coefficient functions by training a deep neural network, see Learn and Apply Constraints for PID Controllers and Train Reinforcement Learning Agent with Constraint Enforcement.
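
As a rough illustration of the deep neural network approach, the following MATLAB sketch fits a small feedforward network (Deep Learning Toolbox) to a synthetic nonlinear mapping from states and actions to a constrained output. The data, the mapping, and the network size are placeholders.

    % Hedged sketch: approximate a nonlinear constraint-function mapping
    % with a small feedforward network (requires Deep Learning Toolbox).
    % Inputs are [state; action] columns; targets are the constrained output.
    numSamples = 500;
    X = randn(2,numSamples);                 % plant state samples (columns)
    U = randn(1,numSamples);                 % control action samples
    Y = sin(X(1,:)) + X(2,:).^2.*U;          % placeholder nonlinear mapping

    net = feedforwardnet(10);                % one hidden layer with 10 neurons
    net = train(net,[X;U],Y);                % tune the approximator on the data

    Yhat = net([X;U]);                       % evaluate the trained network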

More About Constraint Enforcement

For more information on constraint enforcement, watch the video An Introduction to Constraint Enforcement, which is part of the Learning-Based Control video series.
