How to calculate trace(A' * B) using SVD vectors of A and B?

5 views (last 30 days)

Xiaohan Du on 26 Jun 2018
Edited: Jan on 29 Jun 2018
Hi all,
Imagine I have two large matrices with more rows than columns, and I need to calculate trace(A' * B) N times. I have two options: 1. calculate trace(A' * B) directly; 2. calculate only the dot products that make up the diagonal of A' * B, then sum them. I tested with the following minimal example, and it turns out the second option is faster:
clear; clc;
nd = 100000;
nt = 500;
A = rand(nd, nt);
B = rand(nd, nt);
N = 100;
%% No SVD
% Option 1: compute the full product and take its trace.
tic
for i = 1:N
    abTr = trace(A' * B);
end
toc
% Option 2: sum the dot products of paired columns (the diagonal of A'*B).
% (abSum is reset inside the loop so each repetition computes the same
% quantity as option 1.)
tic
for i = 1:N
    abSum = 0;
    for j = 1:nt
        abSum = abSum + A(:, j)' * B(:, j);
    end
end
toc
Elapsed time is 58.320345 seconds.
Elapsed time is 13.558039 seconds.
Now I need to apply a truncated SVD to A and B to reduce storage. The following code keeps only the first 10 singular vectors (and values):
% apply svd
[ua, sa, va] = svd(A, 'econ');
[ub, sb, vb] = svd(B, 'econ');
% group in cells
nRem = 10;
ucell = {ua va ub vb};
scell = {sa sb};
% truncate
ucell = cellfun(@(v) v(:, 1:nRem), ucell, 'un', 0);
scell = cellfun(@(v) v(1:nRem, 1:nRem), scell, 'un', 0);
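As a quick sanity check of the truncation (a minimal sketch, using the full sa, sb and the nRem, nd, nt defined above), one can see how much accuracy the rank-10 approximation gives up and how much storage it saves:
% Error of the best rank-nRem approximation, from the discarded singular
% values: ||A - A_r||_F^2 = sum of sigma_i^2 for i > nRem.
sA = diag(sa);                      % all singular values of A ('econ' SVD)
sB = diag(sb);                      % all singular values of B
relErrA = sqrt(sum(sA(nRem+1:end).^2) / sum(sA.^2));
relErrB = sqrt(sum(sB(nRem+1:end).^2) / sum(sB.^2));
% Storage of the truncated factors relative to one full matrix.
storageRatio = (nRem*(nd + nt) + nRem) / (nd*nt);
fprintf('relErrA = %.3g, relErrB = %.3g, storage ratio = %.3g\n', ...
    relErrA, relErrB, storageRatio)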
My question is: is it possible to calculate the trace (an approximation, due to the truncation) with option 2, using the SVD vectors in ucell and the singular values in scell? I can do it with option 1 using
trace(vb' * va * sa' * (ua' * ub) * sb)
but I'd like it to be faster by only considering the diagonal elements.
Many thanks!

Accepted Answer

Anton Semechko on 26 Jun 2018
Edited: Anton Semechko on 26 Jun 2018
I am not aware of any identity that expresses trace(A'*B) in terms of the SVDs of A and B, and I don't think one exists, but there is an identity that does not require explicit calculation of the A'*B matrix (each diagonal entry (A'*B)(j,j) is the dot product A(:,j)'*B(:,j), so summing these over j is the same as summing the elementwise products of all entries of A and B):
trace(A'*B) = sum(A(:).*B(:))
For example,
nd = 100000;
nt = 500;
A = rand(nd, nt);
B = rand(nd, nt);
tic
tr1 = trace(A'*B);
toc
% Elapsed time is 0.857053 seconds.
tic
tr2 = sum(A(:).*B(:));
toc
% Elapsed time is 0.210807 seconds.
As you can see, you obtain the result about 4 times faster when using the above identity. This identity is especially useful when nt is large, because the machine may not have sufficient memory to hold the nt-by-nt matrix A'*B.
10 Comments
Xiaohan Du on 28 Jun 2018
trace(A' * B) = sum(diag(A' * B)), so actually only the diagonal elements are needed rather than all elements, right?
My understanding is: to calculate the diagonal elements of A' * B, only paired columns of A and B are needed, i.e. if A = [a_1, ..., a_n] and B = [b_1, ..., b_n], then only the products a_i' * b_i need to be calculated.
However, this diagonal-only calculation cannot be done once A and B have been decomposed by SVD, because reconstructing A' * B from the SVD factors produces the full matrix (which happens to contain the diagonal), rather than just one or a few diagonal elements.
Anton Semechko on 28 Jun 2018
I thought the problem was resolved because you found an efficient way to evaluate tr(A'*B) using SVDs of A and B.
Nevertheless, of course it is possible to recover ONLY the diagonal elements of A'*B from the SVDs of A and B while staying within memory constraints of the machine. However, I am not sure that computing these diagonal elements first and then summing them would help you improve evaluation of tr(A'*B).
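For example, a minimal sketch assuming the ucell = {ua va ub vb} and scell = {sa sb} layout from the question: with A ≈ Ua*Sa*Va' and B ≈ Ub*Sb*Vb', we have A'*B ≈ Va*M*Vb' with the small r-by-r core M = Sa*(Ua'*Ub)*Sb (r = nRem), and the diagonal of Va*M*Vb' can be formed row by row without ever building an nt-by-nt matrix:
% Truncated factors from the question: ucell = {ua va ub vb}, scell = {sa sb}.
Ua = ucell{1};  Va = ucell{2};  Ub = ucell{3};  Vb = ucell{4};
Sa = scell{1};  Sb = scell{2};
% Small r-by-r core; the only work proportional to nd is forming Ua'*Ub.
M = Sa * (Ua' * Ub) * Sb;
% diag(Va*M*Vb') without forming the nt-by-nt matrix:
dAB = sum((Va * M) .* Vb, 2);    % nt-by-1 vector of diagonal entries
trApprox = sum(dAB);             % truncated approximation of trace(A'*B)
Whether this beats sum(A(:).*B(:)) on the full matrices depends on nd, nt and the truncation rank nRem, but all intermediate arrays here are at most nt-by-nRem.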


More Answers (1)

Jan on 26 Jun 2018
Edited: Jan on 26 Jun 2018
I assume this is faster:
nd = 100000;
nt = 500;
A = rand(nd, nt);
B = rand(nd, nt);
tic;
abTr = trace(A' * B);
toc % Elapsed time is 0.497233 seconds.
tic;
abSum = 0;
for j = 1:nt
    abSum = abSum + A(:, j)' * B(:, j);
end
toc; % Elapsed time is 0.141059 seconds.
tic
abTr2 = sum(A(:).*B(:));
toc % Elapsed time is 0.182408 seconds.
tic;
abTr3 = A(:).' * B(:);
toc % Elapsed time is 0.054548 seconds.
tic;
[ua, sa, va] = svd(A, 'econ');
[ub, sb, vb] = svd(B, 'econ');
toc % Elapsed time is 10.629806 seconds.
% And this does not even include the further computations
% relative error:
abs(abTr - abTr3) / abTr
The reshaping by (:) and the transposition create at most shared data copies, so no additional memory is used. Maybe the JIT acceleration can call the dot product of the BLAS library directly, where the transposition can be provided as a flag.
Of course, for a more accurate measurement, some timing loops would be smarter, or better: use timeit. But a factor of 3 (abTr3 against abSum) is significant enough.
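A minimal sketch of the timeit approach (parameter-free function handles capture A and B from the workspace; timings will of course differ by machine):
% More reliable timing with timeit, which repeats each call internally:
tTrace = timeit(@() trace(A' * B));      % full matrix product, then trace
tSum   = timeit(@() sum(A(:) .* B(:)));  % elementwise product, then sum
tDot   = timeit(@() A(:).' * B(:));      % single BLAS dot product
fprintf('trace: %.3f s   sum: %.3f s   dot: %.3f s\n', tTrace, tSum, tDot)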
5 Comments
Xiaohan Du on 29 Jun 2018
Hi Jan,
I don't understand why
A(:).' * B(:)
is faster than
for j = 1:nt
    abSum = abSum + A(:, j)' * B(:, j);
end
Theoretically, the second operation involves only nt operations, each a dot product between the j-th columns of A and B, while the first has to calculate the product between every 2 vectors in A and B, so shouldn't the first cost more than the second?
Jan on 29 Jun 2018
Edited: Jan on 29 Jun 2018
A(:) creates a shared data copy: only the header of the variable is changed, but the data are not copied. The same happens for the transposition. Matlab's JIT acceleration might even be smart enough to avoid the shared data copies by calling the underlying BLAS function DDOT directly. This function is implemented with multi-threading - you can see this in the task manager, where all cores are loaded during the processing.
So A(:).' * B(:) starts as many threads as cores are available and no memory has to be copied around in the slow RAM.
abSum = abSum + A(:, j)' * B(:, j) also calls DDOT and starts multiple threads. Starting a thread is very costly, and here we need 4 starts for each column. I'm not sure whether Matlab's JIT is smart enough to avoid creating the temporary vectors A(:,j) and B(:,j) in each iteration. But if this is performed explicitly, two variables have to be created and the elements of the columns are copied. Calling the library function DDOT also adds some overhead.
Summary:
  • Case 1: DDOT is called one time and 4 threads are started.
  • Case 2: For nt=500, 1000 columns are copied, DDOT is called 500 times, 2000 threads are started (on a 4 core CPU).
So for me the speedup is expected.
the first has to calculate the product between every 2 vectors in A and B
This might be a misunderstanding: A(:) and B(:) are two vectors only, not "every two vectors".

