Is the rank function reliable?
8 views (last 30 days)
In doing econometrics I need to check whether a numerically determined matrix has full rank.
Is it reliable to just evaluate rank, or should one use tricks such as rank(A) = rank(A'A), so that one can evaluate det(A'A) and see if it is close to zero? If so, when is the determinant so close to zero that one would consider the matrix not to have full rank?
Thanks!
p.s. I'm new to this forum, so sorry if I broke some rules; I really don't know which tag to apply.
0 Comments
Accepted Answer
John D'Errico
28 May 2014
Edited: John D'Errico
28 May 2014
Many things to discuss here.
1. Yes, rank is quite reliable. Is it perfect? Well, no numerical method can be perfect. One can trivially generate a matrix that is not singular, yet for which any simple scheme working in double precision will fail to determine the rank properly.
A = diag([1 1e-20])
A =
     1       0
     0   1e-20
rank(A)
ans =
     1
So A is clearly diagonal, with non-zero elements, but in double precision, it is numerically singular as far as rank can tell.
However, rank IS essentially as good a scheme as you can get in double precision. If rank tells you the matrix is singular in double precision, then effectively it is so. Anyway, floating point arithmetic limits what you can know about any such matrix.
2. Can you use cond instead of rank? Of course you can, but if rank tells you the matrix has full rank, then cond would do so too. The extra thing that cond tells you is how CLOSE the matrix is to singular, but you still need to make a judgment about the singularity. Similarly, looking at the singular values of the matrix would tell you something, IF you understand what you are looking at.
Regardless, just use rank. The other tools all require a judgement about how large (or small) a number must be before it matters; if you don't know what that threshold should be, then you are FAR better off to just use rank! (My brother-in-law once asked me what a certain tool did, and if he needed it in his shop. My response was essentially that if he had no idea what the tool was, he did not need it.)
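To make that concrete, here is a minimal sketch (reusing the diagonal matrix from the example above, purely as an illustration) of what rank, cond and svd each report:
A = diag([1 1e-20]);   % the same nearly singular matrix as above
rank(A)                % ans = 1: numerically rank deficient in double precision
cond(A)                % ans = 1e+20: tells you HOW close to singular A is
svd(A)                 % ans = [1; 1e-20]: the tiny singular value is the culprit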
3. Evaluating det(A'*A), and wondering how close it needs to be to zero, is INSANE. The determinant is NEVER a good test of singularity. This is a myth perpetrated on students by authors of textbooks and papers, authors who sadly don't know any better. They keep reading papers by other authors of the same ilk, and repeating what they see. Of course, this is an infinite loop.
The determinant is NEVER a good test of singularity. PERIOD. PERIOD squared.
One can trivially create a matrix that has any determinant you wish, simply by scaling that matrix by an arbitrary constant. Clearly, scaling a matrix by a constant has NO effect on the rank of that matrix, but it DOES strongly affect the determinant. In fact, for an NxN matrix A with
det(A) == 1
we have
det(k*A) == k^N
So if k is small, then det(k*A) will be tiny! Likewise, if k is large, then det(k*A) will be immense. Since the value of k has no impact on the singularity of k*A, the determinant is clearly meaningless for deciding whether the matrix is singular. PERIOD. I don't care how many papers or books you have read by authors who have no understanding of the issues. There are a lot of fools (sorry, but true) out there who will endlessly repeat what they have read, not understanding the mathematics.
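A minimal sketch of that scaling argument (the identity matrix here is just a convenient stand-in for any nonsingular matrix):
A = eye(5);            % perfectly well conditioned, det(A) == 1
det(1e-5*A)            % ans = 1e-25: "close to zero", yet the matrix is NOT singular
det(1e5*A)             % ans = 1e+25: huge, for an equally nonsingular matrix
rank(1e-5*A)           % ans = 5
rank(1e5*A)            % ans = 5: scaling has no effect at all on the rank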
4. Of course, computing det(A'*A) is worse yet. Here we are squaring the condition number of the matrix, so if we could not determine whether A was singular using det, why in the name of god and little green apples would forming A'*A help?
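The squaring of the condition number is easy to see; this sketch just uses a mildly ill-conditioned diagonal matrix as an arbitrary example:
A = diag([1 1e-6]);
cond(A)                % ans = 1e+6
cond(A'*A)             % ans = 1e+12: twice as many digits lost before you even start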
5. Similarly, comparing rank(A) to rank(A'*A) is meaningless.
A = diag([1 1e-10]);
rank(A)
ans =
     2
rank(A'*A)
ans =
     1
Was A singular? Of course not. Rank told you so, but you can screw things up by forming A'*A when there was no need to do so. In fact, it is almost NEVER a good idea to form A'*A. Almost any problem that "needs" A'*A is better served by working with a factorization of A instead; go read some of your old linear algebra 101 notes.
DON'T compute A'*A. Instead, you might compute the QR factorization of A, then use that to form what is essentially a Cholesky factorization of A'*A, BUT keep it in factored form! I.e., if
A = Q*R
then
A'*A = R'*Q'*Q*R = R'*R
Again, DON'T compute R'*R. That would just get you into the same hot water as before. Keep R'*R in factored form. Anything you need to do with it can be done using R alone, both more efficiently AND more accurately. (READ YOUR NOTES! If you never took linear algebra, it is high time to do so, or at least read a book on the subject if you are asking questions like this.)
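For example, if the end goal is a least squares solve (a common reason econometricians form A'*A), a sketch of the factored approach might look like this; the random matrix and right hand side are made up purely for illustration:
A = randn(100,5);           % made-up overdetermined system, for illustration only
b = randn(100,1);
x_bad  = (A'*A) \ (A'*b);   % the normal equations: squares the condition number
[Q,R]  = qr(A,0);           % economy-size QR, so A = Q*R
x_good = R \ (Q'*b);        % same least squares solution, but working from R
x_best = A \ b;             % or just let backslash do it for you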
Sorry about the rant. It is not directed at you personally. But if nobody ever tells you the issues and the reasons to avoid things like det, etc., you will never learn, and you will likely perpetuate the myth. There are GOOD numerical schemes to be found for many of the things we do. At the same time, there are many schemes out there that are numerically useless or worse, because they are dangerous if people use them without understanding the issues.
set('soapbox','off','rant','off')
2 Comments
Cedric
29 May 2014
Edited: Cedric
29 May 2014
Bookmarked, printed as PDF, commented-to-keep-track, almost sent to my smart phone! I also voted for the question; not a lot of people, even on this forum, could have given you such an answer. This is typically a case where many of us, after reading, will think "I'm glad he asked".
More Answers (1)
Matt J
29 May 2014
Edited: Matt J
29 May 2014
Regardless of what you decide about rank as a measure of singularity, I prefer not to use Matlab's RANK command to compute it, or at least not with the default tolerance on the singularity. It has a very funny way of choosing this default:
s = svd(A);
if nargin==1
    tol = max(size(A)) * eps(max(s));
end
r = sum(s > tol);
Why the tolerance should be set relative to size(A) has always eluded me.
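Note that rank also accepts an explicit tolerance as a second argument, which sidesteps the default entirely; the threshold 1e-10 below is only an example value, not a recommendation:
A = diag([1 1e-12]);   % made-up example
rank(A)                % ans = 2 with the default tolerance (about 2*eps here)
rank(A, 1e-10)         % ans = 1 with an explicit, application-chosen tolerance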
I use the function below, which is somewhat similar to using cond() as John suggested. In addition to estimating the rank, it can also pick out the linearly independent columns of the matrix and their locations for you, as additional output arguments.
function [r,idx,Xsub] = lindep(X,tol)
%Extract a linearly independent set of columns of a given matrix X
%
%  [r,idx,Xsub] = lindep(X)
%
%in:
%
%  X:    The given input matrix
%  tol:  A rank estimation tolerance. Default = 1e-10
%
%out:
%
%  r:    rank estimate
%  idx:  Indices (into X) of linearly independent columns
%  Xsub: Extracted linearly independent columns of X

if ~nnz(X)   % X has no non-zeros and hence no independent columns
    r = 0; Xsub = []; idx = [];
    return
end

if nargin < 2, tol = 1e-10; end

[Q, R, E] = qr(X,0);   % economy-size QR with column pivoting
diagr = abs(diag(R));

% Rank estimation from the diagonal of R
r = find(diagr >= tol*diagr(1), 1, 'last');

if nargout > 1
    idx = sort(E(1:r));
    idx = idx(:);
end
if nargout > 2
    Xsub = X(:,idx);
end
0 Comments