Confused about Karhunen-Loeve Transform

Views: 17 (last 30 days)
Kyle on 26 Sep 2011
Commented: Jaime De La Mota Sanchis on 27 May 2021
Hi all, I've read lots of documents about the KLT, but I'm still confused about how to correctly apply it to a data set. My references:
To sum up the first link, the steps to calculate the KLT of a matrix x = [x1(t); x2(t); x3(t); ...; xn(t)], in which each xn(t) is a time series, are:
Step 1: auto-correlation matrix --> R = x*x';
Step 2: calculate the transformation matrix Phi_H --> [V,D] = eig(R); Phi_H = V';
Step 3: transformed matrix --> y = Phi_H * x
Then I applied this algorithm to the example at link2, to transform the matrix x = [1 2 4; 2 3 10]. The transformation matrix does not match the one computed by Mathematica, and neither does the final result.
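For reference, here are those three steps spelled out as a script on the link2 example (each row of x is one time series):
x = [1 2 4; 2 3 10];   % example matrix from link2; each row is a time series
R = x * x';            % Step 1: auto-correlation matrix (2x2)
[V, D] = eig(R);       % Step 2: eigen-decomposition of R
Phi_H = V';            % transformation matrix
y = Phi_H * x          % Step 3: transformed matrix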
Could anyone please revise my algorithm for transforming a time-series matrix?
Thank you for any suggestions, Kyle
2 Comments
Daniel Shub on 26 Sep 2011
I always confuse the differences between KLT, SVD, and PCA. Do you need to subtract the mean in KLT?
Kyle on 27 Sep 2011
They are really similar; the wiki states that the KLT is "also called" SVD. In some code the SVD step removes the mean, but I think the KLT doesn't need this step.
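A small sketch of the difference, assuming "removing the mean" means centering each series before forming the second-moment matrix:
x  = [1 2 4; 2 3 10];     % rows are time series, as in the question
R  = x * x';              % KLT as described in link1: raw second-moment matrix
xc = x - mean(x, 2);      % subtract each row's mean (centering)
Rc = xc * xc';            % centered version; PCA diagonalizes this (up to a 1/(n-1) factor)
[V,  D]  = eig(R)         % the eigenvectors/eigenvalues of the two versions generally differ
[Vc, Dc] = eig(Rc)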


Accepted Answer

UJJWAL on 26 Sep 2011
Edited: Walter Roberson on 24 Aug 2015
OK, let me put it down one by one.
The Karhunen-Loeve Transform relies on the covariance matrix of a set of observation vectors. The basic algorithm is clearly explained in the first link you posted. Below is the code for the KLT on the same example as in the Mathematica page you mentioned:
clc
clear all
y = [1,2,4; 2,3,10];
y = y'                % Reasons for transposing will become clear when you read the second point given below.
[V,D] = eig(cov(y))   % eigenvectors of the covariance matrix form the transformation
KLT = V' * y';        % project the original (column-wise) data onto the eigenvectors
Note the following points:
a) For a given eigenvalue there can be multiple eigenvectors, which will be multiples of each other. Now, if you run the program, please observe the following points:
(i) Take a look at the transformation matrix returned by Mathematica and the matrix V (which is the same as the transformation matrix returned by MATLAB). They are essentially the same, except that one of the eigenvectors has been multiplied by -1, which is perfectly acceptable: after multiplication by -1 it is still an eigenvector. It is this difference between the transformation matrices used by the two programs that results in different values of the KLT. Also note that one of the eigenvectors is the same in both programs, and so is the corresponding row of the KLT. So essentially the difference is there only because of the multiplication by -1, and there is nothing to worry about in these results; if the mathematics allows it, then nothing can refute it. (A small numeric check of this point is sketched after point (ii) below.)
(ii) Also note that in Mathematica the observations are taken row-wise, while in MATLAB they are taken column-wise. That is why I have transposed y in the above code.
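To verify point (i) numerically, here is a small check; the sign-fixing rule at the end is just one possible convention, not something either program uses:
y = [1,2,4; 2,3,10]';
C = cov(y);                      % 2x2 covariance matrix
[V, D] = eig(C);
lambda = diag(D);
v = V(:,1);
norm(C*v - lambda(1)*v)          % ~0: v is an eigenvector
norm(C*(-v) - lambda(1)*(-v))    % also ~0: -v is an equally valid eigenvector
% One possible sign convention before comparing results with another program:
for k = 1:size(V,2)
    [~, idx] = max(abs(V(:,k)));
    if V(idx,k) < 0
        V(:,k) = -V(:,k);        % flip so the largest-magnitude entry is positive
    end
end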
Please remember that each piece of software calculates a quantity by a certain algorithm, and minute differences may arise. The only important thing is mathematical consistency. Hope this clears your doubt.
Happy to help.
UJJWAL
2 Comments
SGUNITN on 30 Oct 2020
I was reading a paper where the KLT is used for template protection.
Could you please share your view on whether it can be used for protecting a template like the one below?
[481.28 404.27 845.21 311.36 363.94 92.913 375.61 2647.8 2479.8 99.213 13.647 407.16 795.75 3121]
Jaime De La Mota Sanchis on 27 May 2021
As far as I understand, the KLT should be orthogonal, but as far as I can tell, these results aren't. Am I wrong?
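For what it's worth, a quick check of what is and is not orthogonal here, assuming the question refers to the quantities in the accepted answer:
y = [1,2,4; 2,3,10]';
[V, D] = eig(cov(y));
V' * V        % ~identity: eigenvectors of the symmetric covariance matrix are orthonormal
KLT = V' * y';
KLT * KLT'    % generally not diagonal: the transformed rows need not be orthogonal to each other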


More Answers (0)
