# Given a matrix A, computing A^n: comparing normal multiplication versus diagonalization. I expected the latter to be faster, but it's not in my case

I have been learning about diagonalization and the SVD. I tried to show in code that computing A^n = A*A*...*A (n factors) is slower than using diagonalization, where A^n = P*D^n*P^-1:
n = 2;
% given a matrix
A = rand(1000,1000);
% CASE 1
tic
product_1 = A^n;
T1=toc;
fprintf('%f is the time for matrix multiplication\n', T1);
% CASE 2
tic
[P, D] = eig(A);
product_2 = P * (D^n)/(P);
T2=toc;
fprintf('%f is the time for diagonalization\n', T2);
0.014834 is the time for matrix multiplication
1.222272 is the time for diagonalization


### Accepted Answer

Christine Tobler 6 Mar 2020
A^2 is just one matrix multiplication, A*A, which is much faster to do directly than the call to EIG.
For larger n, A^n isn't just doing the multiplication A*A*...*A. You can see the algorithm used by reading the source of the mpower function: type edit mpower.
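The repeated-squaring idea behind fast integer matrix powers can be sketched as follows. This is an illustrative helper (matpow is a made-up name), not the actual mpower implementation, which may differ; read the real source with edit mpower:

```matlab
function B = matpow(A, n)
% MATPOW  Illustrative square-and-multiply matrix power.
% Uses roughly log2(n) squarings plus one multiply per set bit of n,
% instead of the n-1 multiplies of a naive A*A*...*A loop.
% (Hypothetical sketch; not MATLAB's actual mpower algorithm.)
    B = eye(size(A));
    while n > 0
        if mod(n, 2) == 1
            B = B * A;        % current bit of n is set: fold in this power
        end
        A = A * A;            % square to advance to the next bit
        n = floor(n / 2);
    end
end
```

For n = 1000 this needs on the order of 15 matrix multiplies rather than 999, which is part of why a plain A^n is hard to beat.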
To show a case where EIG is faster, you could increase n significantly, explicitly compute A*A*...*A in a loop instead of calling A^n, and have EIG return a vector of eigenvalues instead of a dense diagonal matrix D. That combination should produce an example where the EIG approach wins.
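A minimal sketch of that experiment. The scaling of A and the value of n are illustrative choices, picked so that lambda.^n neither overflows nor underflows to zero; eig(A,'vector') returns the eigenvalues as a vector:

```matlab
n = 500;
A = rand(1000) / 600;             % scale so the spectral radius is < 1

% Explicit A*A*...*A: n-1 full matrix multiplies
tic
B = A;
for k = 2:n
    B = B * A;
end
t_loop = toc;

% Diagonalization with eigenvalues returned as a vector
tic
[P, lambda] = eig(A, 'vector');
% P .* (lambda.').^n scales column j of P by lambda(j)^n,
% i.e. P*diag(lambda.^n) without ever forming the dense diagonal matrix.
C = real((P .* (lambda.').^n) / P);   % discard imaginary round-off
t_eig = toc;

fprintf('loop: %f s, eig: %f s, relative difference: %g\n', ...
        t_loop, t_eig, norm(B - C)/norm(B));
```

With n this large, the one-time cost of EIG is amortized against hundreds of matrix multiplies, so the diagonalization route should come out ahead.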
Note that there are numerical issues with using EIG in cases where the matrix of eigenvectors P is close to singular. For example, try the EIG method on the matrix A = [3 1; 0 3], which has a repeated eigenvalue and only one independent eigenvector, so it is not diagonalizable:
>> A*A
ans =
9 6
0 9
>> [P, D]=eig(A); product_2 = P * (D^2)/(P)
product_2 =
9 0
0 9
>> cond(P)
ans =
3.0024e+15
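As a contrast to the defective example above (this is an added illustration, under the standard fact that symmetric matrices have orthonormal eigenvector bases): for symmetric A, cond(P) is 1 and the reconstruction is accurate.

```matlab
% For symmetric A, eig returns orthonormal eigenvectors, so P is
% perfectly conditioned and inv(P) is simply P.'.
A = rand(500);
A = (A + A.')/2;                          % symmetrize
[P, D] = eig(A);
relerr = norm(P*(D^2)*P.' - A^2) / norm(A^2);
fprintf('cond(P) = %.3g, relative error = %.3g\n', cond(P), relerr);
```

Here cond(P) comes out essentially 1 and the relative error is near machine precision, unlike the [3 1; 0 3] case where cond(P) is around 1e15.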


### More Answers (1)

James Tursa 6 Mar 2020
Edited: James Tursa 6 Mar 2020
Well, the timings didn't meet your expectations. Why would you expect A*A to be slower than a full eig calculation followed by two matrix multiplies and a matrix divide?
Also, did you mean for D^n to be D.^n for the comparison?
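For a diagonal D the two agree in exact arithmetic, since the off-diagonal zeros stay zero under both operations, but the elementwise form avoids the general matrix-power machinery. A quick check with illustrative values:

```matlab
% On a diagonal matrix, the matrix power and the elementwise power agree,
% but D.^n only needs scalar powers of each entry.
D = diag([2 3 5]);
n = 4;
isequal(D^n, D.^n)               % true for diagonal D with n >= 1
isequal(D^n, diag(diag(D).^n))   % cheapest form: power only the diagonal
```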
