For loop over function handle, how to speed up the code?

3 views (last 30 days)
Sina on 13 Sep 2022
Commented: Sina on 14 Sep 2022
Hi all,
I have a high-dimensional operator in the form of a function handle, which I can think of as a matrix A. For my purposes I need this matrix to behave as if its columns were normalized, but because of some randomization occurring inside the operator I cannot do the normalization within the function itself. My idea was to feed canonical basis vectors e_i (vectors that are zero everywhere except for a 1 at the i-th entry) to the operator, so that A(e_i) returns the i-th column of A, and to store the column norms for later use in my code. This takes too much time, however, because N (the dimension of the operator's input space, i.e. the number of columns) is very large (2^16). On the other hand, I cannot pass an identity matrix to obtain the whole matrix at once, because my matrix is not sparse and would require too much memory (nearly 32 GB). Here's the code:
col_norms = zeros(N, 1);   % column norms to be stored
z = zeros(N, 1);
z(1) = 1;
z = sparse(z);             % first canonical vector e_1
col_norms(1) = norm(A(z)); % A is a function handle (the matrix)
for i = 1:N-1
    z = circshift(z, 1);   % shift by one to produce the next canonical vector e_(i+1)
    col_norms(i + 1) = norm(A(z));
end
How can I speed up this procedure? Is there a better way of computing the column norms of my operator? Any help would be appreciated.
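For reference, the plan is to use the stored norms to emulate the column-normalized operator without touching A itself, since (A*D)*x = A(D*x) with D = diag(1./col_norms). A minimal sketch of that later step (the handle name A_normalized is just illustrative):
% Sketch of the later normalization step (assumes col_norms has been filled in).
scale = col_norms;
scale(scale == 0) = 1;              % guard against zero columns (assumption)
% Column-normalized operator: (A*diag(1./scale))*x == A(x ./ scale)
A_normalized = @(x) A(x ./ scale);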
1 Comment
Jan on 13 Sep 2022
Use the profiler to find the bottleneck. Most likely it is not found in the posted code, but in the code of A(z).
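For example, a minimal profiling run, assuming A and z are defined as in the question:
profile on
nrm = norm(A(z));   % the call whose cost we want to break down
profile viewer      % open the report; the time spent inside A(z) will dominate if the handle is the bottleneck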


Answers (1)

Bruno Luong on 13 Sep 2022
Edited: Bruno Luong on 13 Sep 2022
If your handle A can accept a matrix as input and the calculation of A(B) is not the bottleneck, you can work in chunks:
% N = 2^16;
chunk = 128;             % adjust to your available RAM
col_norm = zeros(N, 1);
ndone = 0;
while ndone < N
    i = ndone+1:min(ndone+chunk, N);      % column indices in this chunk
    j = i - ndone;                        % local column indices within the chunk
    E = sparse(i, j, 1, N, j(end));       % chunk of canonical basis vectors
    col_norm(i) = sqrt(sum(A(E).^2, 1));  % norms of columns i of A
    ndone = i(end);
end
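As a quick sanity check of the chunked loop, one can wrap a small explicit matrix in a handle and compare against vecnorm (the matrix M, the handle Atest, and the sizes below are a made-up test setup, not the original operator):
% Made-up test: a small explicit matrix standing in for the real operator.
Ntest = 500;
M = randn(Ntest);
Atest = @(x) M * x;          % handle that accepts vector or matrix input
chunk = 128;
col_norm = zeros(Ntest, 1);
ndone = 0;
while ndone < Ntest
    i = ndone+1:min(ndone+chunk, Ntest);
    j = i - ndone;
    E = sparse(i, j, 1, Ntest, j(end));
    col_norm(i) = sqrt(sum(Atest(E).^2, 1));
    ndone = i(end);
end
max(abs(col_norm - vecnorm(M, 2, 1)'))   % should be at numerical-precision level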
3 Comments
Bruno Luong on 14 Sep 2022
Edited: Bruno Luong on 14 Sep 2022
You can easily "overload" it:
% N = 2^16;
chunk = 128;             % adjust to your available RAM
col_norm = zeros(N, 1);
ndone = 0;
while ndone < N
    i = ndone+1:min(ndone+chunk, N);
    j = i - ndone;
    E = sparse(i, j, 1, N, j(end));
    col_norm(i) = sqrt(sum(AMat(E).^2, 1));
    ndone = i(end);
end
function AX = AMat(X)
    for k = size(X, 2):-1:1
        % AX(:,k) = full(A(X(:,k)));
        AX(:, k) = A(X(:, k));
    end
end
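One caveat: if this is pasted into a script, the local function AMat cannot see A from the script's workspace. A minimal sketch of one workaround, passing the handle in explicitly (the name AMatWith is illustrative):
% Variant of the wrapper that receives the handle explicitly, so it also works
% as a local function in a script. In the chunked loop, call it as AMatWith(A, E).
function AX = AMatWith(A, X)
    for k = size(X, 2):-1:1   % backward loop so AX is allocated at full size on the first pass
        AX(:, k) = A(X(:, k));
    end
end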
But I'm afraid the bottleneck is the matrix-vector handle itself, not the wrapper, so there is not much more we can do to help.
Sina on 14 Sep 2022
Yes, you're right. It seems I have reached a dead end here. Maybe I'll edit the question in the next few days with more details about the function handle.


Release: R2017b
