Why does the covariance matrix P become non-positive definite in my Unscented Kalman Filter MATLAB code?
I'm implementing an Unscented Kalman Filter in MATLAB and I have followed a tutorial on how to create a UKF.
First I initialize the state vector xhat and the covariance matrix P. In the MATLAB code I just set them to some arbitrary numbers.
Then I compute the sigma points
MATLAB code:
[xhati] = ukf_compute_sigma_points(xhat, P, a, k, row);
Then I apply the observation function to the current sigma points. In this case I assume the observation function is y = h(x) = x, so I say that
MATLAB code:
yhati = xhati;
Then I compute the estimated measurement vector yhat.
MATLAB code:
yhat = ukf_multiply_weights(yhati, WM, M);
I compute the covariance matrix Py of the predicted measurement.
MATLAB code:
Py = ukf_estimate_covariance(yhati, yhat, Wc, R, row);
Then the cross-covariance matrix Pxy.
MATLAB code:
Pxy = ukf_estimate_cross_covariance(xhati, xhat, yhati, yhat, a, k, M);
I compute the Kalman gain matrix K by using Cholesky decomposition to solve Py*K(:, i) = Pxy(:, i) for every column of Pxy.
MATLAB code:
K = ukf_create_kalman_K(Py, Pxy, M);
I do the state update and the covariance P update as well.
MATLAB code:
[xhat, P] = ukf_state_update(K, Py, P, xhat, y, yhat, M);
Now I'm computing the sigma points again, because now I'm going to predict the future states. Notice that I DON'T take the matrix square root of c*P. Instead I assume that L from chol(c*P) is the square root of c*P, where c is a scalar. The authors of the Unscented Kalman Filter made the same assumption:
The sigma points capture the same mean and covariance irrespective of the choice of matrix square root which is used.
Numerically efficient and stable methods such as the Cholesky decomposition[18] can be used.
MATLAB code:
xhati = ukf_compute_sigma_points(xhat, P, a, k, M);
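To convince myself of the quoted claim, here is a small self-contained check (the numbers and parameters are just illustrative, not the ones from my filter) that the sigma points reconstruct the same mean and covariance whether the square root comes from chol or from sqrtm:
MATLAB code:
P  = [5 0; 0 2];
x  = [1; -1];
M  = 2; a = 1; k = 2;
c  = a*a*(M + k);
for sq = {chol(c*P, 'lower'), sqrtm(c*P)}   % two different square roots of c*P
    L  = sq{1};
    xi = [x, x + L, x - L];                  % the 2*M + 1 sigma points
    Wm = [1 - M/c, repmat(1/(2*c), 1, 2*M)]; % same weights as ukf_create_weights WM
    m  = xi * Wm';                           % recovered mean
    Pr = (xi - m) * diag(Wm) * (xi - m)';    % recovered covariance
    disp([norm(m - x), norm(Pr - P)])        % both ~0 for either square root
end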
Now I'm using the transition function with the new sigma points to get the future states.
In this case the transition function is dx1 = -2*x1*x2 + 4*x2 + 4*u1 and dx2 = -x1 - 3*x2 + 7*u2 (see transistion_function in the complete code below).
MATLAB code:
xhati = ukf_transition(xhati, u, M);
And I compute the estimated states.
MATLAB code:
xhat = ukf_multiply_weights(xhati, WM, M);
And finally I update the covariance matrix P, which does NOT stay positive definite. That's the issue with my code and I don't know why.
**Question:**
When running this MATLAB code, why does the covariance P end up as this matrix?
P = [-0.028857 -0.583326
-0.583326 2.546039]
At the function
[xi] = ukf_compute_sigma_points(x, P, a, k, M)
We can clearly see that P is not positive definite. Why?
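The negative entry on the diagonal already rules out positive definiteness, and a quick check on the printed values confirms it (just a sanity check, not part of the filter):
MATLAB code:
P = [-0.028857 -0.583326;
     -0.583326  2.546039];
eig(P)                 % one eigenvalue is negative, so P is not positive definite
[~, flag] = chol(P);   % flag > 0 also signals "not positive definite"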
My theory:
If you look at the code
[xhat, P] = ukf_state_update(K, Py, P, xhat, y, yhat, M)
You can clearly see this statement
P = P - K*Py*K';
I tried changing the transpose to
P = P - K'*Py*K;
and now the code is working. But is this the correct way? What do you think?
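One way to compare the two candidates is to set a breakpoint inside ukf_state_update and look at the smallest eigenvalue of each result (just a debugging sketch using whatever P, K and Py are in the workspace at that moment):
MATLAB code:
cand1 = P - K*Py*K';             % the update as originally written
cand2 = P - K'*Py*K;             % the update with the transpose moved
min(eig((cand1 + cand1')/2))     % negative value => not positive definite
min(eig((cand2 + cand2')/2))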
I also have a textbook about the UKF where this line is written with P instead of Py:
P = P - K*P*K';
Here is my complete MATLAB code:
function ukf_test()
% Initial states
y = [0; 0];
u = [1; 2];
xhat = [0; 0];
P = [5 0; 0 2];
Q = [1, 0; 0, 2];
R = [1.5, 0; 0, 2];
a = 1;
k = 2;
b = 3;
M = 2;
for i = 1:2
[xhat, y, P] = ukf(xhat, y, u, P, Q, R, a, k, b, M);
end
end
function [xhat, y, P] = ukf(xhat, y, u, P, Q, R, a, k, b, M)
column = 2 * M + 1;
row = M;
% Step 0 - Create the weights
[WM, Wc] = ukf_create_weights(a, b, k, row);
% UPDATE: Step 1 - Compute sigma points
[xhati] = ukf_compute_sigma_points(xhat, P, a, k, row);
% UPDATE: Step 2 - Use the nonlinear measurement function to compute the predicted measurements for each of the sigma points.
yhati = xhati; % Here we assume that the observation function y = h(x, u) = x
% UPDATE: Step 3 - Combine the predicted measurements to obtain the predicted measurement
yhat = ukf_multiply_weights(yhati, WM, M);
% UPDATE: Step 4 - Estimate the covariance of the predicted measurement
Py = ukf_estimate_covariance(yhati, yhat, Wc, R, row);
% UPDATE: Step 5 - Estimate the cross-covariance between xhat and yhat. The sum skips the first sigma point because xhati(:, 1) - xhat = 0
Pxy = ukf_estimate_cross_covariance(xhati, xhat, yhati, yhat, a, k, M);
% UPDATE: Step 6 - Find kalman K matrix
K = ukf_create_kalman_K(Py, Pxy, M);
% UPDATE: Step 7 - Obtain the estimated state and state estimation error covariance at time step
[xhat, P] = ukf_state_update(K, Py, P, xhat, y, yhat, M);
% PREDICT: Step 0 - Predict the state and state estimation error covariance at the next time step
xhati = ukf_compute_sigma_points(xhat, P, a, k, M);
% PREDICT: Step 1 - Use the nonlinear state transition function to compute the predicted states for each of the sigma points.
xhati = ukf_transition(xhati, u, M);
% PREDICT: Step 2 - Combine the predicted states to obtain the predicted states at time
xhat = ukf_multiply_weights(xhati, WM, M);
% PREDICT: Step 3 - Compute the covariance of the predicted state
P = ukf_estimate_covariance(xhati, xhat, Wc, Q, M);
end
function [WM, Wc] = ukf_create_weights(a, b, k, M)
column = 2 * M + 1;
WM = zeros(1, column);
Wc = zeros(1, column);
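% With the standard UKF scaling lambda = a^2*(M + k) - M, the formulas below
% are WM(1) = lambda/(M + lambda), Wc(1) = WM(1) + (1 - a^2 + b) and
% WM(i) = Wc(i) = 1/(2*(M + lambda)) for i > 1 (same values, just rewritten).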
for i = 1:column
if(i == 1)
Wc(i) = (2 - a * a + b) - M / (a * a * (M + k));
WM(i) = 1 - M / (a * a * (M + k));
else
Wc(i) = 1 / (2 * a * a * (M + k));
WM(i) = 1 / (2 * a * a * (M + k));
end
end
end
function [xi] = ukf_compute_sigma_points(x, P, a, k, M)
column = 2 * M + 1;
compensate_column = 2 * M - 1;
row = M;
c = a * a * (M + k);
xi = zeros(row, column);
% According to the paper "A New Extension of the Kalman Filter to Nonlinear Systems"
% by Simon J. Julier and Jeffrey K. Uhlmann, they used L = chol(c*P) as "square root",
% instead of computing the square root of c*P. According to them, cholesky decomposition
% was a numerically efficient and a stable method.
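% Note: chol throws an error when c*P is not positive definite;
% calling [L, p] = chol(c*P, 'lower') instead would return p > 0 rather than erroring.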
L = chol(c*P, 'lower');
for j = 1:column
if(j == 1)
xi(:, j) = x;
elseif(and(j >= 2, j <= row + 1))
xi(:, j) = x + L(:, j - 1);
else
xi(:, j) = x - L(:, j - compensate_column);
end
end
end
function x = ukf_multiply_weights(xi, W, M)
column = 2 * M + 1;
row = M;
x = zeros(row, 1);
for i = 1:column
x = x + W(i)*xi(:, i);
end
end
function P = ukf_estimate_covariance(xi, x, W, O, M)
column = 2 * M + 1;
row = M;
P = zeros(row, row);
for i = 1:column
P = P + W(i)*(xi(:, i) - x)*(xi(:, i) - x)';
end
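% O is the additive noise covariance: R in the measurement update, Q in the prediction step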
P = P + O;
end
function P = ukf_estimate_cross_covariance(xi, x, yi, y, a, k, M)
column = 2 * M + 1;
row = M;
c = 1 / (2 * a * a * (M + k));
P = zeros(row, row);
for i = 2:column % Begins at 2 because xi(:, 1) - x = 0
P = P + (xi(:, i) - x)*(yi(:, i) - y)';
end
P = c*P;
end
function K = ukf_create_kalman_K(Py, Pxy, M)
row = M;
K = zeros(row, row);
for i = 1:row
% Solve Py * K(:, i) = Pxy(:, i) with Cholesky (forward and back substitution)
L = chol(Py, 'lower');
y = linsolve(L, Pxy(:, i));
K(:, i) = linsolve(L', y);
end
% This is equivalent to K = linsolve(Py, Pxy);
end
function [xhat, P] = ukf_state_update(K, Py, P, xhat, y, yhat, M)
row = M;
xhat = xhat + K*(y - yhat);
P = P - K*Py*K';
end
function xhati = ukf_transition(x, u, M)
column = 2 * M + 1;
row = M;
xhati = zeros(row, column);
for i = 1:column
xhati(:, i) = transistion_function(x(:, i), u);
end
end
function dx = transistion_function(x, u)
dx = zeros(2, 1);
dx(1) = -2*x(1)*x(2) + 4*x(2) + 4*u(1);
dx(2) = -x(1) - 3*x(2) + 7*u(2);
end
Accepted Answer
Harun Topbas on 2 Sep 2021
Edited: Harun Topbas on 2 Sep 2021
Hi Daniel,
I encountered this problem too. I can advise you to use the square-root unscented Kalman filter algorithm; here is the reference:
Van der Merwe, Rudolph, and Eric A. Wan. “The Square-Root Unscented Kalman Filter for State and Parameter-Estimation.” 2001 IEEE International Conference on Acoustics, Speech, and Signal Processing. Proceedings (Cat. No.01CH37221), 6:3461–64. Salt Lake City, UT, USA: IEEE, 2001. https://doi.org/10.1109/ICASSP.2001.940586.
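The basic idea for the measurement update (a rough sketch of my own using the same variable names as in your code, not code copied from the paper) is to keep a Cholesky factor S of P and downdate it with cholupdate instead of ever forming P - K*Py*K' explicitly, so the factorization stays valid by construction:
Sy = chol(Py, 'upper');          % factor of the innovation covariance, Py = Sy'*Sy
S  = chol(P,  'upper');          % current factor of the state covariance
U  = K * Sy';                    % because K*Py*K' = U*U'
for j = 1:size(U, 2)
    S = cholupdate(S, U(:, j), '-');   % rank-1 downdate, keeps S triangular
end
P = S' * S;                      % only needed if you still want the full matrix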
20 Comments
Harun Topbas on 15 Sep 2021
Edited: Harun Topbas on 15 Sep 2021
Hi Daniel,
Sorry I am late. I have no idea whether this algorithm is useful for multivariable systems or for training deep neural networks. I used it for a radar measurement problem, tracking a target where the measurement model is highly nonlinear.
This algorithm did a great job for my problem. I coded the algorithm in C. I revised the m-file in the link above a little:
- Instead of cholupdate(A, x, '+-'), which amounts to chol(A'*A +- x*x'), I used your chol() and qr() functions on GitHub :) (see the small check below).
- In the Kalman gain calculation step (line 69 in ukf.m), K = P12*inv(S2'*S2). But instead of the inverse operation, you'd better use a linear solver for computational efficiency.
Then it worked for me. I can share my code with you if you want.
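To illustrate what I mean by that substitution (a toy check, nothing from the actual filter): cholupdate works on an upper-triangular factor R with A = R'*R, so the rank-1 update can be rebuilt with a plain chol:
A = [4 1; 1 3];
x = [0.5; -0.2];
R  = chol(A);                 % upper factor, A = R'*R
R1 = cholupdate(R, x, '+');   % factor of A + x*x'
R2 = chol(A + x*x');          % the same factor computed from scratch
norm(R1 - R2)                 % ~0 up to rounding; use '-' for a downdate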
Good luck