How to avoid error accumulation when doing vector transformations?

Views: 113 (last 30 days)
serige on 5 November 2024 at 7:56
Answered: Shashi Kiran on 7 November 2024 at 9:02
I am working on an eigenvalue problem. For simplicity, let's say I have a matrix M which has an eigenvalue 1, and the associated (left) eigenvector v (assuming the eigenspace is 1-dimensional), so
v*M = v
Now, if I want v*(M - diag(p)) to equal the all-ones row vector ones(1,n), I have
p = (v - 1) ./ v
since v*diag(p) = v.*p = v - 1, and therefore v*(M - diag(p)) = v*M - (v - 1) = ones(1,n).
So if I have a vector that is really close to v, I would expect the above expression to be very close to the all-ones vector as well, but this is not the case. Here I have a (row) stochastic matrix of size 1000 by 1000, and v is a good approximation of the stationary vector, which I just computed:
>> norm(v*M1000-v, 1)
ans =
1.0300e-17
>> p = (v - 1) ./ v;
>> norm(v*(M1000 - diag(p)) - ones(1,1000),1)
ans =
4.0290e-13
I am trying to understand what makes the two errors so different from each other. This gets worse as the matrix size grows. Is there a better way I can compute the expression so that the errors are closer?
2 Comments
Shashi Kiran on 5 November 2024 at 9:33
As specified, M1000 is a 1000x1000 matrix and v is a 1000x1 eigenvector. How is the multiplication v*M1000 carried out, or am I misunderstanding something?
Could you provide M1000 and v so I can examine the issue further?
serige on 6 November 2024 at 5:19
Edited: 6 November 2024 at 6:46
v is actually a 1x1000 row vector; I apologize for the confusion.
Here is what I used to create M1000:
n = 1000;
M1000 = randn(n,n);
M1000 = abs(M1000);
M1000 = bsxfun(@rdivide, M1000, sum(M1000, 2));
For big matrices, I use a simple iterative power method to compute v. Since this is a stochastic matrix, v in this case is a probability vector. For a 1000x1000 matrix, you can probably just use a direct method to compute v:
v = null(eye(size(M1000)) - M1000', 'r');
v = v';
v = v ./ sum(v);
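For reference, here is a minimal sketch of the power iteration mentioned above (my illustration, not serige's actual code; the tolerance and iteration cap are arbitrary choices):
% Power iteration for the stationary (left) eigenvector of a
% row-stochastic matrix; tolerance and maxIter are illustrative.
v = ones(1, n) / n;                  % start from the uniform distribution
for k = 1:10000                      % iteration cap (assumption)
    w = v * M1000;                   % one multiplication per step
    w = w / sum(w);                  % renormalize so w stays a probability vector
    done = norm(w - v, 1) < 1e-15;   % convergence test (tolerance is illustrative)
    v = w;
    if done, break, end
end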


Accepted Answer

Shashi Kiran on 7 November 2024 at 9:02
The difference in error between the two norm calculations arises from the nature of the operations being performed. This is because:
  • The first norm calculation, norm(v * M1000 - v, 1), directly checks if v is an eigenvector, which is generally more stable and less prone to numerical errors.
  • The second calculation, norm(v * (M1000 - diag(p)) - ones(1, 1000), 1), first forms p = (v - 1) ./ v and subtracts diag(p) from M1000. Because the entries of v are small, the entries of p are large, and the products v(i)*p(i) are rounded relative to a magnitude of about 1 rather than about 1/n; those rounding errors accumulate across the 1000 terms, as the sketch after this list illustrates.
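To make the cancellation concrete, here is a small check (my illustration, not part of the original answer). In exact arithmetic, v .* p equals v - 1 exactly, so the two residuals coincide; in floating point, the rounding in each product v(i)*p(i) is what separates them:
p = (v - 1) ./ v;
% Each product v(i)*p(i) is rounded relative to magnitude ~1, and the
% 1-norm sums n such errors, so this term alone is about n*eps:
norm(v .* p - (v - 1), 1)
% The second residual inherits exactly that error:
norm(v * (M1000 - diag(p)) - ones(1, numel(v)), 1)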
To reduce these errors, you can use MATLAB's vpa (variable-precision arithmetic) function, which performs calculations at higher precision. For more details, refer to the vpa documentation: https://www.mathworks.com/help/symbolic/sym.vpa.html. Using higher precision (such as 32 or more decimal digits) can significantly reduce floating-point error, especially for larger matrices. This approach, however, can be computationally expensive.
Here is how you can implement this:
% Initialize data
n = 1000;
M1000 = randn(n, n);
M1000 = abs(M1000);
M1000 = bsxfun(@rdivide, M1000, sum(M1000, 2));
v = null(eye(size(M1000)) - M1000', 'r');
v = v';
v = v ./ sum(v);
% Convert M1000 and v to symbolic with higher precision
M1000_sym = vpa(M1000, 32); % Set precision to 32 digits
v_sym = vpa(v, 32);
p_sym = (v_sym - 1) ./ v_sym;
% Compute norms in higher precision
norm1_sym = norm(v_sym * M1000_sym - v_sym, 1);
fprintf('norm(v_sym * M1000_sym - v_sym, 1) = %.4e\n', double(norm1_sym));
norm(v_sym * M1000_sym - v_sym, 1) = 5.6037e-16
expression_sym = v_sym * (M1000_sym - diag(p_sym)) - ones(1, n);
norm2_sym = norm(expression_sym, 1);
fprintf('norm(v_sym * (M1000_sym - diag(p_sym)) - ones(1, n), 1) = %.4e\n', double(norm2_sym));
norm(v_sym * (M1000_sym - diag(p_sym)) - ones(1, n), 1) = 5.6037e-16
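As a cheaper alternative in plain double precision (my suggestion, not from the accepted answer): since v .* p equals v - 1 exactly, the second expression simplifies algebraically to v*M1000 - v, so evaluating the simplified form avoids the cancellation without any symbolic arithmetic:
% v*(M1000 - diag(p)) - ones(1,n) == v*M1000 - v in exact arithmetic,
% so the simplified form reproduces the first, smaller residual:
norm(v * M1000 - v, 1)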
Hope this helps!

More Answers (0)
