Eigenvalue and eigenvector computation on extremely ill-conditioned matrices
Hello everyone, I have a question about the eig function for computing eigenvalues and eigenvectors. I have a pair of matrices A and P for which I want to solve the generalized eigenproblem A v = lambda P v. These matrices are 160x160, singular, non-symmetric, extremely ill-conditioned (cond(A) = 1e18, cond(P) = 1e28, rcond(A) = rcond(P) = 0), and rank-deficient (rank(A) = 150, rank(P) = 120). Clearly, since P is singular, I cannot invert it and solve the standard eigenproblem inv(P)*A v = lambda v, so I have to call eig on the generalized problem:
eig(A,P), which automatically chooses the QZ algorithm to compute the eigenvalues.
Unfortunately, roughly 40 of these eigenvalues come out as Inf + 0i, and in some cases A v is not equal to lambda P v, so the problem is not solved accurately. The eigenvectors corresponding to the Inf eigenvalues suggest that a finite eigenvalue, on the order of 1e15, could exist in those cases: so why does eig not compute them?
I have tried balancing and normalizing each row of A and P to decrease the condition numbers, but the result does not change.
I have tried using the pseudoinverse of P to solve the standard problem eig(pinv(P)*A), and in this case I get no Inf eigenvalues. However, I am not sure whether the eigenvalues computed with this method are solutions of my original problem: can I solve it this way? To be consistent I should instead solve eig(pinv(P)*A, pinv(P)*P), but the second matrix remains singular, so Inf eigenvalues are computed in this case as well.
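One way to sanity-check the pinv() workaround is to measure the residual of each computed pair against the original generalized problem. A minimal sketch with a small assumed stand-in pair (the 2x2 matrices below are just an illustration, not the actual 160x160 A and P):

```matlab
% Sketch: check whether eigenpairs from the pinv() workaround satisfy
% the original problem A*v = lambda*P*v (assumed toy matrices)
A = [2 0; 0 3];
P = [1 0; 0 0];                 % singular second matrix
[V, D] = eig(pinv(P)*A);        % standard problem on pinv(P)*A
lambda = diag(D);
for k = 1:numel(lambda)
    v = V(:,k);
    res = norm(A*v - lambda(k)*P*v) / max(norm(A*v), eps);
    fprintf('lambda = %g, relative residual = %g\n', lambda(k), res)
end
```

In this toy case one computed pair satisfies the original pencil and the other does not (its true generalized eigenvalue is infinite), which suggests the pinv() approach can silently replace Inf eigenvalues with spurious finite ones.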
Does anyone have an idea of how to solve this ill-conditioned problem?
Thanks in advance
Answers (2)
Hi @Andrea
You may consider using the balance() function to apply diagonal scaling to the ill-conditioned matrix, thereby improving the conditioning of the eigenvalue problem. If this approach proves ineffective, you may need to resort to specialized algorithms such as the Implicitly Restarted Arnoldi Method (IRAM), which is well suited to ill-conditioned matrices.
Radke's Master's thesis, which presents a MATLAB implementation of IRAM for tackling large-scale eigenvalue problems, is accessible via the following link:
% Ill-conditioned matrix
A = [-1    1e-2  1e-4;
      1e+2 -1    1e-2;
      1e+4  1e+2 -1];

% Eigen-decomposition of the raw matrix; cond(Va) measures how
% ill-conditioned the eigenvector basis is
[Va, Ea] = eig(A)
cond(Va)

% Diagonal scaling with balance(), then the same computation on the
% balanced matrix
[T, B] = balance(A)
[Vb, Eb] = eig(B)
cond(Vb)
7 Comments
Andrea
5 Oct 2023
Sam Chak
5 Oct 2023
Hi @Andrea
Try getting the sssMOR Toolbox from File Exchange, and then use eigs() to compute the eigenvalues of the sparse matrix. Note that MATLAB also has a built-in function of the same name.
Andrea
5 Oct 2023
Sam Chak
6 Oct 2023
Hi @Andrea
Honestly, I cannot tell whether eig() or eigs() should be trusted more. With eigs(), the accuracy of the initial guesses (the shift and starting vector) can significantly affect the results. Have you tried refining the initial guesses and tolerance settings to obtain more reliable eigenvalues?
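Something along these lines might be worth trying (a sketch with assumed stand-in matrices; the shift sigma, tolerance, and iteration limit are placeholder values you would tune for your actual A and P):

```matlab
% Sketch: generalized sparse eigenproblem with eigs(), using a shift
% near the region of interest and a tightened tolerance.
% The matrices below are assumed diagonally dominant toys, not your A, P.
rng(0);
n = 50;
A = sprandn(n, n, 0.1) + n*speye(n);
P = sprandn(n, n, 0.1) + n*speye(n);

opts.tol   = 1e-12;   % tighter convergence tolerance than the default
opts.maxit = 500;     % allow more iterations
sigma = 1.0;          % shift near the eigenvalues of interest (assumed)

% 10 generalized eigenvalues of A*v = lambda*P*v nearest to sigma
[V, D] = eigs(A, P, 10, sigma, opts);

% Residuals of the computed pairs against the original pencil
res = vecnorm(A*V - P*V*D) ./ vecnorm(V);
```

Note that shift-invert eigs() needs A - sigma*P to be factorizable, and with a singular P some eigenvalues of the pencil are genuinely infinite, so this strategy only targets the finite ones near sigma.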
Andrea
6 Oct 2023
Torsten
6 Oct 2023
It's hard to say, but I think you must accept: garbage in, garbage out.
Sam Chak
6 Oct 2023
Hi @Andrea
Previously, I mentioned the Arnoldi method. The Lanczos algorithm is also quite well known for computing eigenvalues. However, it appears that your interest lies not in developing the algorithm from scratch in MATLAB but in using eigenvalue solvers from algorithm libraries capable of efficiently handling sparse matrices.
My colleague recommends using SuiteSparse, as it should prove effective for relatively small to moderately sized sparse matrices such as the one you have. You can find more information here:
Christine Tobler
9 Oct 2023
If the matrix P has rank 120 and its size is 160, you should expect 40 eigenvalues to be Inf; this is how singularities in the second input matrix are represented by the generalized eig.
For the simple case where A and P are both diagonal, each eigenvalue would just be A(i, i) ./ P(i, i). So if a diagonal element of P is zero, the corresponding eigenvalue is going to be Inf. This is usually fine; in many practical problems the Inf eigenvalues can simply be ignored.
So my question would be, is it really a problem that some of the computed eigenvalues are Inf? That probably depends on what your next steps are going to be with those computed results.
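To make the convention concrete, here is a tiny assumed example (a 3x3 diagonal pair, not your actual matrices) showing how a zero diagonal entry of P produces an Inf generalized eigenvalue:

```matlab
% Tiny assumed example: a zero on the diagonal of P yields an Inf
% generalized eigenvalue, following the convention described above.
A = diag([2 3 5]);
P = diag([1 1 0]);   % rank-deficient: last diagonal entry is zero
e = eig(A, P)        % contains 2, 3, and Inf (ordering may vary)
```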
