Eigenvalue and eigenvector computation on extremely ill-conditioned matrices

Hello everyone, I have a question about the eig function for computing eigenvalues and eigenvectors. I have two matrices A and P for which I want to solve the generalized eigenproblem A v = lambda P v. These matrices are 160x160, singular, non-symmetric, and extremely ill-conditioned (cond(A) = 1e18, cond(P) = 1e28, rcond(A) = rcond(P) = 0), and rank-deficient (rank(A) = 150, rank(P) = 120). Since P is singular, I clearly cannot invert it and solve a standard eigenproblem inv(P)*A v = lambda v, so I have to call eig on the generalized problem:
eig(A,P), which automatically chooses the QZ algorithm to compute the eigenvalues.
Unfortunately, roughly 40 of these poles come out as Inf + 0i, and in some cases A v is not equal to lambda P v, so the problem is not solved accurately. The eigenvectors corresponding to the Inf eigenvalues suggest that a finite eigenvalue, on the order of 10^15, could exist in those cases: so why does eig not compute them?
I have tried balancing and normalizing each row of A and P to decrease the condition number, but the result does not change.
I have tried using the pseudoinverse of P to solve a standard problem, eig(pinv(P)*A), and in this case I get no Inf eigenvalues. However, I am not sure whether the eigenvalues computed with this method are solutions of my original problem: can I solve it this way? To be consistent I should solve eig(pinv(P)*A, pinv(P)*P), but the second matrix remains singular, so Inf eigenvalues are computed in this case as well.
Does anyone have an idea of how to solve this ill-conditioned problem?
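Not part of the original thread, but the two phenomena in the question (a singular P producing Inf eigenvalues, and pinv(P)*A changing the spectrum) can be reproduced on a tiny made-up pencil. This is a Python/SciPy sketch, on the assumption that scipy.linalg.eig(A, P) is the analogue of MATLAB's eig(A,P) (both go through LAPACK's QZ-based generalized solvers); the 3x3 matrices are invented for illustration only:

```python
import numpy as np
from scipy import linalg

# Toy pencil (NOT the poster's matrices): P is singular, rank 2 of 3.
A = np.array([[2.0, 0.0, 1.0],
              [0.0, 3.0, 0.0],
              [1.0, 0.0, 4.0]])
P = np.diag([1.0, 1.0, 0.0])

# QZ returns pairs (alpha, beta) with lambda = alpha/beta;
# beta ~ 0 encodes an infinite eigenvalue.
w, V = linalg.eig(A, P, homogeneous_eigvals=True)
alpha, beta = w

finite = np.abs(beta) > 1e-12 * np.abs(alpha)
lam_gen = np.sort(np.real(alpha[finite] / beta[finite]))
n_inf = int(np.sum(~finite))
print(lam_gen, n_inf)    # finite eigenvalues 1.75 and 3.0, plus 1 at Inf

# The pinv substitution solves a DIFFERENT standard problem:
lam_pinv = np.sort(np.real(np.linalg.eigvals(np.linalg.pinv(P) @ A)))
print(lam_pinv)          # 0.0, 2.0, 3.0 -- not the same spectrum
```

Note that pinv(P)*A keeps the eigenvalue 3 but replaces the pair {1.75, Inf} with {0, 2}: the pseudoinverse silently projects out part of the pencil, so its eigenvalues are in general not solutions of A v = lambda P v.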
Thanks in advance

Answers (2)

You may consider using the balance() function to apply diagonal scaling to the ill-conditioned matrix, thereby improving the conditioning of the eigenvalue problem. If this approach proves ineffective, it becomes necessary to resort to specialized algorithms such as the Implicitly Restarted Arnoldi Method (IRAM), which are well suited to ill-conditioned matrices.
Radke's Master's thesis, which presents a MATLAB implementation of IRAM for large-scale eigenvalue problems, is accessible via the following link:
% Ill-conditioned matrix
A = [-1    1e-2  1e-4;
      1e+2 -1    1e-2;
      1e+4  1e+2 -1];
[Va, Ea] = eig(A)
Va = 3×3
   -0.0002    0.0001   -0.0000
    0.0100    0.0100   -0.0068
    0.9999    0.9999    1.0000
Ea = 3×3
   -2.0000         0         0
         0    1.0000         0
         0         0   -2.0000
cond(Va)
ans = 8.1651e+03
[T, B] = balance(A)
T = 3×3
    0.0020         0         0
         0    0.1250         0
         0         0   16.0000
B = 3×3
   -1.0000    0.6400    0.8192
    1.5625   -1.0000    1.2800
    1.2207    0.7812   -1.0000
[Vb, Eb] = eig(B)
Vb = 3×3
   -0.7102    0.4503   -0.1918
    0.5548    0.7036   -0.6460
    0.4334    0.5497    0.7388
Eb = 3×3
   -2.0000         0         0
         0    1.0000         0
         0         0   -2.0000
cond(Vb)
ans = 1.4547

7 Comments

Dear Sam,
Thanks for your reply!
  • Using the balance function was one of my first attempts at solving this problem. However, after [T,B] = balance(A) the condition number becomes Inf, so the problem gets worse.
  • I think that using specialized algorithms such as the Implicitly Restarted Arnoldi Method is the only way to solve this problem. Unfortunately, the speig function that could implement this method was never included in MATLAB. Do you know of other ready-to-use functions or methods for solving this problem? I found a version of the IRAM method on Git, but it does not handle generalized eigenproblems such as mine (A v = lambda P v).
Try getting the sssMOR Toolbox from File Exchange, and then use eigs() to compute the eigenvalues of the sparse matrix. MATLAB also has a built-in function with the same name.
Thanks
I'm now comparing the results from the two methods:
  • the first using eig(A,P)
  • the second using eigs(A,P,n,sigma), where, since P is singular, the only option is to provide guess values sigma for the eigenvalues. I have chosen, for example, n=6; by varying sigma, can I reconstruct the root locus over a frequency range?
In the image below is the comparison between the eig() solution and the eigs() solution: many poles are similar, but some are completely different. Moreover, one pole has a positive real part in one case and a negative real part in the other. Which one should I trust more?
By comparing the residuals, i.e. by computing A v - lambda P v, both eigenvalue/eigenvector pairs (computed with eig and eigs) seem to solve the eigenproblem, so why, in your opinion, is there this difference?
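As an aside on that residual check: A v - lambda P v is undefined for the Inf pairs, but the homogeneous form beta*A*v - alpha*P*v is finite for every pair and should be small (near machine precision, relative to the matrix norms) for a backward-stable QZ solution. A Python/SciPy sketch of the check, on a made-up random singular pencil (scipy.linalg.eig(A, P) assumed analogous to MATLAB's eig(A,P)):

```python
import numpy as np
from scipy import linalg

rng = np.random.default_rng(1)
n = 6
A = rng.standard_normal((n, n))
P = rng.standard_normal((n, n))
P[:, -1] = 0.0          # force P singular -> at least one Inf eigenvalue

w, V = linalg.eig(A, P, homogeneous_eigvals=True)
alpha, beta = w

# ||beta*A*v - alpha*P*v|| stays finite even when beta == 0 (lambda = Inf),
# unlike ||A*v - lambda*P*v||. Scale by the norms to get a relative residual.
max_rel = 0.0
for a, b, v in zip(alpha, beta, V.T):
    r = np.linalg.norm(b * (A @ v) - a * (P @ v))
    scale = (abs(b) * np.linalg.norm(A) + abs(a) * np.linalg.norm(P)) * np.linalg.norm(v)
    max_rel = max(max_rel, r / scale)
print(max_rel)          # small: every pair, Inf included, satisfies the pencil
```

A caveat this illustrates: a small residual only certifies backward stability, and with cond(A) = 1e18 and cond(P) = 1e28 two backward-stable methods (eig and eigs) can both pass this check yet return visibly different poles.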
Honestly, I cannot tell whether eig() or eigs() should be trusted more. In the eigs() method, the accuracy of these initial guesses can significantly affect the results. Have you tried refining the initial guesses and tolerance settings to obtain more reliable eigenvalues?
Yes, but the results seem insensitive to all the parameters, such as tolerance, number of iterations, and initial guess (even when I give as initial guess a pole computed by the eig function, it returns a different pole!), and the other controls.
It's hard to say, but I think you must accept: garbage in, garbage out.
Previously, I mentioned the Arnoldi method. The Lanczos algorithm is quite well-known for the computation of eigenvalues. However, it appears that your interest lies not in developing the algorithm from scratch in MATLAB but in utilizing eigenvalue-finding tools from algorithm libraries capable of efficiently handling sparse matrices.
My colleague recommends using SuiteSparse, as it should prove effective for relatively small to moderately sized sparse matrices, such as the one you have (). You can find more information here:

Sign in to comment.

Christine Tobler on 9 October 2023
If the matrix P has rank 120 and its size is 160, you should expect 40 eigenvalues to be Inf - this is how singularities in the second input matrix are being represented by generalized eig.
For the simple case where A and P are both diagonal, each eigenvalue would be just A(i,i) / P(i,i). So if a diagonal element of P is zero, its eigenvalue is going to be Inf. This is usually fine; in many practical problems the Inf eigenvalues can simply be ignored.
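The diagonal case can be spelled out in a few lines. This is a Python/SciPy sketch with invented 3x3 matrices, assuming scipy.linalg.eig(A, P) behaves like MATLAB's eig(A,P); internally QZ returns pairs (alpha, beta) with lambda = alpha/beta, and beta = 0 is how the Inf eigenvalues are encoded:

```python
import numpy as np
from scipy import linalg

A = np.diag([2.0, 9.0, 5.0])
P = np.diag([1.0, 3.0, 0.0])   # P(3,3) = 0 -> one eigenvalue is Inf

w = linalg.eig(A, P, right=False, homogeneous_eigvals=True)
alpha, beta = w

# lambda_i = A(i,i)/P(i,i): 2/1 = 2, 9/3 = 3, and 5/0 -> Inf
lam = [a / b if abs(b) > 1e-12 else np.inf for a, b in zip(alpha, beta)]
finite_lams = np.sort([np.real(x) for x in lam if np.isfinite(x)])
n_inf = sum(1 for x in lam if not np.isfinite(x))
print(finite_lams, n_inf)      # finite eigenvalues 2.0 and 3.0, one Inf
```

With rank(P) = 120 out of 160, the same mechanism produces exactly the 40 Inf eigenvalues observed in the question.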
So my question would be, is it really a problem that some of the computed eigenvalues are Inf? That probably depends on what your next steps are going to be with those computed results.


Asked: 4 October 2023

Answered: 9 October 2023
