Matrix multiplication accuracy problem

42 views (last 30 days)
Petri M on 21 Jan 2021
Edited: James Tursa on 27 Jan 2021
I made a function that calculates autocorrelation using partial matrices. When I tried to verify it against the autocorrelation computed from the whole matrices, I thought I had made a mistake, but then I found that MATLAB gives different results depending on how the matrix is split into parts. However, there should be no difference between the calculations!
Now, do not give me mere floating point arithmetic as the explanation: it does NOT explain this difference.
Here is a very simple test that shows the problem:
a1=randn(100,1); a2=randn(100,1);
A=[a1 a2];
R0=A'*A;
R1=[a1 a2]'*[a1 a2];
R2=[a1'*a1 a1'*a2; a2'*a1 a2'*a2];
[R0-R1 R2-R1]
ans =
1.0e-13 *
0.1421 -0.0089 0.2842 -0.0089
-0.0089 -0.2842 -0.0089 0
If you know anything about matrix algebra, you should know that R0 is internally calculated EXACTLY in the same way as R1 and R2 above. So, floating point arithmetic as such does not explain the difference.
So, obviously MATLAB must be rounding the intermediate results differently in these cases, but I cannot understand WHY they are rounded differently. There is no sensible reason to do it differently in these cases. MathWorks, why are you doing this? It just makes verification more difficult when the results are not what they are supposed to be.

Accepted Answer

James Tursa on 21 Jan 2021
Edited: James Tursa on 27 Jan 2021
The difference is this:
R0=A'*A;
In the above expression, you are using the exact same matrix A on both sides of the operation. The MATLAB parser can see this and thus calls a symmetric BLAS matrix multiply routine, which fills in about half of the result matrix; MATLAB then copies those values into the other half. This is faster than a generic matrix multiply, and R0 is guaranteed to be exactly Hermitian/symmetric in this case.
R1=[a1 a2]'*[a1 a2];
In the above expression, you are building the two [a1 a2] matrices on the fly. The MATLAB parser is not smart enough to recognize that the two operands are in fact built the exact same way (it just sees two different operands with two different data pointers), so it calls a generic BLAS matrix multiply routine to get the R1 result. The operations are done in a different order than the symmetric routine, so in general you will get a slightly different result. And the result is not guaranteed to be exactly Hermitian/symmetric.
So here again, this is just the normal differences that would be expected when doing floating point arithmetic in a different order. Nothing else is going on, and MATLAB isn't trying to do anything to make your life more difficult. In fact the parser is trying to help you by giving you a way to speed up your calculations and guarantee an exact Hermitian/symmetric result. There are ways to force the calculation to use either the symmetric or generic routine in the background. My advice is simply to recognize what is going on and then write code accordingly. Assigning the operands to a single variable and then using that variable in a way that the parser will recognize the symmetry would probably be the best approach.
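Following that advice, a minimal sketch of the single-variable approach (my illustration, not code from the answer; `issymmetric` requires R2014a or later):

```matlab
a1 = randn(100,1); a2 = randn(100,1);

B = [a1 a2];            % build the concatenation once
R = B'*B;               % same variable on both sides: parser can pick the symmetric routine
issymmetric(R)          % true: result is exactly symmetric

C = [a1 a2]'*[a1 a2];   % two separately built operands: generic routine,
issymmetric(C)          % exact symmetry is NOT guaranteed here
```

The point is not that C is wrong, only that its off-diagonal pairs may differ in the last bits, while R cannot.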
2 Comments
Petri M on 21 Jan 2021
Thank you! I was only seeking a technical explanation, and this was it. Is this actually described in any public MATLAB documentation? I searched for an answer for a long time but could not find anything except general floating point material, and that does not explain the difference without knowing how the order of operations changes in these cases.
James Tursa on 21 Jan 2021
Edited: James Tursa on 27 Jan 2021
I don't know if there is anything in the MATLAB documentation that discusses this in any detail, i.e., how to set up and maintain symmetry in these types of calculations. It is also not documented how MATLAB detects symmetry: by looking at the variable structure pointers, or at the variable data pointers? You can sometimes find snippets of how the parser works in certain situations, but the complete parser rules are not published and change from release to release. For this particular case, if you use the exact same variable for both operands, you will make it easy for the parser to recognize the symmetric case.


More Answers (1)

John D'Errico on 21 Jan 2021
The Mathworks is NOT rounding numbers just to make your life difficult. They are not rounding numbers at all, at least not beyond that which happens in normal floating point arithmetic. And apparently, you are telling us that we cannot tell you the answer, since you already "know" what it is.
What you do not seem to appreciate is that MATLAB uses the blas to do linear algebra. This makes your computations more efficient. At the same time, the blas can change the sequence of the computations involved. And as soon as you play with the sequence of the adds in a computation, you incur the possibility that you will see a problem.
So I'm sorry, but what you "know" about matrix algebra does not always apply. Mathematical proofs do not apply down in the least significant bits of floating point numbers.
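The order sensitivity described above is easy to demonstrate with plain scalar arithmetic (a minimal illustration, independent of the BLAS):

```matlab
(1e16 + 1) - 1e16   % ans = 0: the 1 is absorbed by the first addition,
                    % because the spacing between doubles near 1e16 is 2
(1e16 - 1e16) + 1   % ans = 1: identical terms, different grouping
```

A generic and a symmetric matrix multiply accumulate the same products in different orders, so their results can differ in exactly this way, at roughly eps-level magnitude.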
3 Comments
Petri M on 21 Jan 2021
After James Tursa's answer I understood what you meant by blas. You should have used uppercase: BLAS. I had never heard of it before, so reading on a phone screen I kept misreading 'blas' as 'bias'... So yes, you were right: the difference comes from floating point arithmetic, but that is not an adequate explanation without knowing how the order of operations changes in these cases.
John D'Errico on 22 Jan 2021
Edited: John D'Errico on 22 Jan 2021
So, I was not completely wrong. What I was wrong about was not capitalizing the letters in blas? Sorry for laughing. Actually, no. I'm still laughing. This is not a case of some ingenious rounding.


Release

R2020b
