This question is closed. Reopen it to edit or answer.
My nested for loops take 40 seconds for 20,000 records. Any way to vectorize or improve?
% Consolidate results by Security and Date
uniqueDate = unique(d.RiskData(:,2));
uniqueSec  = unique(d.RiskData(:,1));
numSecs  = length(uniqueSec);
numDates = length(uniqueDate);
% Preallocate the output cell array and write the header row.
% fldList (the field names) and numFields are assumed to be defined earlier in the script.
d.RiskDataNew{(numDates * numSecs) + 1, size(d.RiskData,2)} = [];
d.RiskDataNew(1,1) = {'SECURITY'};
d.RiskDataNew(1,2) = {'DATE'};
for i = 1:numFields
    d.RiskDataNew(1,i + 2) = fldList(i);
end
tic
n = 1;
for x = 1:numSecs
    for y = 1:numDates
        idx = find(contains(d.RiskData(1:end,1), uniqueSec(x)) & ...
                   contains(d.RiskData(1:end,2), uniqueDate(y)));
        if ~isempty(idx)
            n = n + 1;
            d.RiskDataNew(n,:) = d.RiskData(idx(1),:);
            if length(idx) > 1
                numFields = size(d.RiskData,2);
                for j = 2:length(idx)
                    for k = 3:numFields   % first two fields are defaults: Security and Date
                        if isempty(d.RiskDataNew{n,k})
                            d.RiskDataNew{n,k} = d.RiskData{idx(j),k};
                        end
                    end
                end
            end
        end
    end
end
toc
2 Comments
dpb
15 Jun 2020
find is superfluous here and somewhat costly...
Hard to decipher what is going on -- how about an explanation of what you're trying to do and a sample of the raw data to work from?
Is it mandatory to use struct? They're expensive relative to just plain data arrays.
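For instance, a minimal sketch of dropping find from the hot path, keeping the rest of the loop body from the question unchanged: test the logical mask directly, and only convert it to indices when a match actually exists.
mask = contains(d.RiskData(:,1), uniqueSec(x)) & ...
       contains(d.RiskData(:,2), uniqueDate(y));
if any(mask)                 % cheap test on the logical mask, no find needed here
    idx = find(mask);        % convert to indices only for the rows that matched
    % ... same consolidation body as in the question ...
end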
Walter Roberson
16 Jun 2020
I have not tested, but I have a suspicion that ismember is faster than contains().
contains(d.RiskData(1:end,1),uniqueSec(x))
That sub-expression can be calculated at the for x level. You can possibly even do
for x = 1:numSecs
    xidx = find(contains(d.RiskData(1:end,1), uniqueSec(x)));
    for y = xidx
        idx = find(contains(d.RiskData(1:end,2), uniqueDate(y)));
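One way to flesh that out (a sketch, not tested against the original data) is to map every row to its security and date index once with ismember, so the inner test becomes a plain numeric comparison. Note that ismember does exact matching while contains does substring matching, so this assumes the security IDs and date strings match exactly.
[~, secOfRow]  = ismember(d.RiskData(:,1), uniqueSec);    % security index of every row
[~, dateOfRow] = ismember(d.RiskData(:,2), uniqueDate);   % date index of every row
for x = 1:numSecs
    rowsOfSec = find(secOfRow == x);                      % rows for this security, computed once per x
    for y = 1:numDates
        idx = rowsOfSec(dateOfRow(rowsOfSec) == y);       % rows for this (security, date) pair
        % ... same consolidation body as in the question ...
    end
end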
Answers (2)
Sayyed Ahmad
16 Jun 2020
You have to avoid the nested loops. Maybe you can use bsxfun to avoid some of the loops.
An example:
A = rand(50);                % 50-by-50 matrix of random values between 0 and 1
% method 1: slow and lots of lines of code
tic
meanA = mean(A);             % mean of every matrix column: a row vector
result = zeros(size(A));     % pre-allocate result for speed, remove this for even worse performance
for j = 1:size(A,1)
    result(j,:) = A(j,:) - meanA;
end
toc
clear result                 % make sure method 2 creates its own result
% method 2: fast and only one line of code
tic
result = bsxfun(@minus, A, mean(A));
toc
The output would be:
Elapsed time is 0.015153 seconds.
Elapsed time is 0.007884 seconds.
See the bsxfun documentation for more details.
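As an aside, in MATLAB R2016b and later the same subtraction can be written with implicit expansion, without bsxfun:
result = A - mean(A);   % the row vector mean(A) is expanded across the rows of A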
0 Comments
Mark McGrath
16 Jun 2020
Edited: Mark McGrath, 16 Jun 2020
3 Comments
dpb
17 Jun 2020
Are all the "fields" numeric? Then a plain cell array would hold them -- but the problem remains that a different number of elements per cell will kill just about any vector operation, as will a variable number of struct fields or the like.
The most straightforward way, although a little more memory-costly, would be to either define a maximum N and preallocate, or determine the maximum number in the dataset and allocate an array of that size, using NaN or some other marker for the missing values.
Or you could use a table or timetable with the necessary number of variable columns -- code could then be written for that scenario as well that would be generic, based on the table size / number of variables/columns.
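One possible sketch of that table/grouping direction (not something spelled out above), assuming columns 1 and 2 of d.RiskData are cellstr Security and Date values and columns 3:end hold numeric scalars or [] for missing entries; firstFilled is an illustrative helper, saved as its own function file or as a local function:
vals = d.RiskData(:,3:end);
vals(cellfun(@isempty, vals)) = {NaN};                % encode missing fields as NaN
vals = cell2mat(vals);                                % numeric matrix, NaN marks missing
[g, sec, dt] = findgroups(d.RiskData(:,1), d.RiskData(:,2));
merged = splitapply(@firstFilled, vals, g);           % one row per (Security, Date) pair
T = [table(sec, dt, 'VariableNames', {'SECURITY','DATE'}), array2table(merged)];

function row = firstFilled(m)
% First non-NaN value in each column of the group; NaN if a column is all-NaN.
row = nan(1, size(m,2));
for c = 1:size(m,2)
    k = find(~isnan(m(:,c)), 1);
    if ~isempty(k)
        row(c) = m(k,c);
    end
end
end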