Enforce Constraints for PID Controllers
Learn how to apply a known constraint function to a PID control application using the Constraint Enforcement block. The block uses a quadratic programming (QP) solver that solves a real-time optimization problem to find a control input that satisfies critical constraints on the plant states. Apply this block in any control loop, especially with control strategies that do not let you specify critical constraints directly, such as PID control and reinforcement learning.
Published: 3 Jun 2021
In this video, we look at how you can modify your controller actions to satisfy critical constraints and action bounds for control systems modeled in Simulink. Specifically, we will look at an example of enforcing constraints on the states of a system in a PID controller application.
The Simulink model here models the dynamics of a plant given by these system equations. Each of the subsystems models the dynamics of one of the states. A control loop is set up here with PID controllers that enable tracking of desired references for each of the plant states.
Now let's consider a control objective where the states of the plant together must track a circular trajectory given by this set of equations. Additionally, there is a constraint that neither of the plant states should exceed one.
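The exact reference equations are shown on screen; as an illustrative placeholder only, a circular reference of radius $r$ and angular frequency $\omega$ could take the form

$$ r_1(t) = r\cos(\omega t), \qquad r_2(t) = r\sin(\omega t). $$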
We construct the reference trajectories for each of the states from these equations so that the PID controllers can use them. When the simulation is run and the trajectory is plotted, you can see that the PID controllers do a good job of reference tracking.
However, the states x1 and x2 both exceed one while tracking the trajectory here, violating the constraint. This is because the PID controllers can only directly control the input to the plant and not its states. This is where the Constraint Enforcement block can help modify the controller inputs to enforce these critical constraints on the plant states.
This block, introduced in the R2021a release of Simulink Control Design, uses a quadratic programming solver to solve a real-time optimization problem to find a controller input u that satisfies these constraints. At every time step, the constraint enforcement algorithm checks whether the constraints are violated, and if they are, it selects the action closest to the nominal action u0 such that the constraints are satisfied.
Here, the coefficients of the constraint function, fx and gx, can be linear or nonlinear functions of the plant states. c is the bound for the constraint function, while umin and umax are the lower and upper bounds of the control actions, respectively.
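Putting this together, the optimization the block solves at each time step can be summarized as follows (notation as above; this is the standard formulation, written out here for clarity):

$$ u^{*} = \arg\min_{u}\; \lVert u - u_{0} \rVert^{2} \quad \text{subject to} \quad f_{x} + g_{x}\,u \le c, \qquad u_{\min} \le u \le u_{\max}. $$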
If the coefficients of the constraint function are already known, you can apply them directly in this block. However, if you are unable to derive the constraint function from the plant, you can use input-output data from experiments or simulations to learn the coefficients, for example by using a deep neural network.
In this example, we know what the constraints are, so let's specify them. The feasible region for the plant is given by this constraint, so the next state should satisfy the constraint. We can approximate the plant dynamics by using this equation, where Ts is the time step, which is the sample time we have set in the model.
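Concretely, for input-affine plant dynamics $\dot{x} = f(x) + g(x)u$, a forward-Euler step over one sample time gives an approximation of this kind (a sketch; the exact equations are shown on screen):

$$ x_{k+1} \approx x_{k} + T_{s}\left(f(x_{k}) + g(x_{k})\,u_{k}\right). $$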
As the next state should not exceed one, we can apply the constraint to this equation. This gives us the constraint function in the form the block needs. Here, fx, gx, and c are given by these matrices, and we will feed these coefficients into the Constraint Enforcement block. In the model, we have put a gain block here to represent the coefficient fx, and a MATLAB Function block to represent gx.
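As a minimal MATLAB sketch of how these coefficients fall out of the Euler approximation, assuming placeholder dynamics (the actual f and g come from the plant equations in the video):

```matlab
% Minimal sketch: constraint coefficients for a two-state, input-affine
% plant xdot = f(x) + g(x)*u, with the constraint x(k+1) <= 1 applied
% through the Euler approximation x(k+1) ~ x(k) + Ts*(f(x) + g(x)*u).
% The f and g below are placeholders, not the plant from the video.

Ts = 0.1;                        % sample time (assumed value)
x  = [0.5; -0.2];                % current plant state (example values)

f = @(x) [-x(1) + x(2); -x(2)]; % placeholder drift dynamics f(x)
g = @(x) eye(2);                 % placeholder input matrix g(x)

% Rearranging x + Ts*f(x) + Ts*g(x)*u <= 1 into fx + gx*u <= c:
fx = x + Ts*f(x);                % state-dependent coefficient fx
gx = Ts*g(x);                    % action-dependent coefficient gx
c  = [1; 1];                     % constraint bound from x(k+1) <= 1

disp(fx), disp(gx), disp(c)
```

In the model, the gain block and MATLAB Function block compute these same quantities online from the measured states.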
In the block dialog parameters, let's set the number of constraints and the number of actions to two, since we have two states to enforce constraints on and two PID controller actions to bound. For the constraint bound, let's use an external source, and connect a Constant block with the vector [1;1] to represent the constant bound c from the constraint function.
With the constraints defined in the block, we can hook it up between the controllers and the plant. The nominal control input u0 feeds in from the controllers, and u* is the modified action that feeds into the plant.
As you can see here, the model has been modified to incorporate the Constraint Enforcement block between the PID controllers and the plant.
Now when we rerun the simulation, we see the PID controllers start by tracking the desired trajectory in the region where the constraints are satisfied. Then, in the region where the states would exceed one and the constraints would be violated, the Constraint Enforcement block successfully constrains the control actions of the PID controllers such that the plant states remain less than one.
So in summary, we saw how you can use the Constraint Enforcement block to modify control actions so that the plant states do not violate critical constraints. This block can be applied in any control loop, especially with control strategies where it is not possible to specify critical constraints directly, such as PID control and reinforcement learning. Further, you can generate C or C++ code from this block and deploy it on embedded hardware.