Initially I was using LQR to regulate the error dynamics, i.e., I computed the gains for de/dt = (A - B*K)e, but this basically results in a PI controller, since the control law with integral action is u = -K*e + ki*z.

I have seen many sources teach reference tracking by augmenting the state vector with the integral of the tracking error, expanding the matrices to A_hat = [A 0; -C 0], and so on, but I still can't understand how that works. I am working on a first-order cruise control problem. From my simulations the integral action is doing all the tracking, and the -K*x term is only getting in the way, trying to regulate the state x to zero.

Here is my code:
A = -(X_u/m + 2*X_uu/m*x0);        % linearized first-order plant
A_hat = [A, zeros(n,1); -C, 0];    % augment with the integral of the tracking error
B_hat = [B; 0];
K_hat = lqr(A_hat, B_hat, Q, R)

K_hat =

   1.0e+03 *

    3.5893   -1.4142
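To check my understanding of what the augmentation is supposed to do, I reproduced the same scheme in Python/SciPy on a toy first-order plant (all numbers below are made up for illustration, not my cruise-control model):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Toy first-order plant xdot = a*x + b*u, y = c*x (assumed numbers)
a, b, c = -0.5, 1.0, 1.0
A = np.array([[a]])
B = np.array([[b]])
C = np.array([[c]])

# Augment with the integral of the tracking error: zdot = r - y
A_hat = np.block([[A, np.zeros((1, 1))],
                  [-C, np.zeros((1, 1))]])
B_hat = np.vstack([B, [[0.0]]])

# LQR on the augmented system (same computation as MATLAB's lqr):
# solve the Riccati equation, then K_hat = R^-1 * B_hat' * P
Q = np.diag([1.0, 10.0])
R = np.array([[1.0]])
P = solve_continuous_are(A_hat, B_hat, Q, R)
K_hat = np.linalg.solve(R, B_hat.T @ P)   # K_hat = [K, -ki]
K, ki = K_hat[0, 0], -K_hat[0, 1]

# Closed loop with u = -K*x + ki*z and zdot = r - C*x
AA = np.block([[A - B * K, B * ki],
               [-C, np.zeros((1, 1))]])

# Forward-Euler simulation of a unit step reference
r, dt = 1.0, 1e-3
x_hat = np.zeros(2)
for _ in range(20000):
    x_hat = x_hat + dt * (AA @ x_hat + np.array([0.0, 1.0]) * r)

print(x_hat[0])   # settles at r = 1.0
```

The simulated output settles at the reference, and the gain on the integrator state comes out negative (like my -1.4142), which is why I flip its sign to get ki.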
K = K_hat(1);  ki = -K_hat(2);     % split the augmented gain: u = -K*x + ki*z
AA = [A - B*K, B*ki; -C, 0];
BB = [0; 1];                       % reference enters through the integrator
CC = [C, 0];  DD = 0;
sys_cl = ss(AA, BB, CC, DD);
r = 25*1.852/3.6*ones(size(t));    % 25 kn reference converted to m/s
x0_hat = [x0, 0]

x0_hat =

   10.2889         0
[y,t,x_hat] = lsim(sys_cl,r,t,x0_hat);
plot(t, y*3.6/1.852, 'k', 'LineWidth', 1.5)   % convert back to knots
title('Closed loop response with integrator')
u_effort = -K*x_hat(:,1) + ki*x_hat(:,2);
plot(t, u_effort,'Color','k')
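As a sanity check on my observation about the -K*x term, here is the steady-state bookkeeping for u = -K*x + ki*z on a scalar plant (the numbers are assumed for illustration, not my actual gains):

```python
# Steady-state bookkeeping for u = -K*x + ki*z on a scalar plant
# xdot = a*x + b*u, y = x. All numbers assumed for illustration.
a, b = -0.5, 1.0
K, ki = 2.0, 3.0
r = 25 * 1.852 / 3.6            # 25 kn reference in m/s, as in my script

x_ss = r                        # zdot = r - x = 0 forces x_ss = r
u_ss = -a * x_ss / b            # trim input: 0 = a*x_ss + b*u_ss
z_ss = (u_ss + K * x_ss) / ki   # where the integrator must settle

print(-K * x_ss + ki * z_ss, u_ss)   # equal by construction
```

The integrator settles exactly where ki*z_ss offsets K*x_ss and supplies the trim input u_ss, which is what I meant by the integral action doing all the tracking.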