MATLAB Answers

How does vectorized lsqcurvefit() calculate the sum of squares: row-wise or column-wise?

69 views (last 30 days)

Xingwang Yong on 14 Jan 2021
Commented: Xingwang Yong on 15 Jan 2021
I have 1001 measurements. In each measurement, there is a curve
y = a*exp(-b*x)
where x and y are vectors containing 8 elements, and a and b are scalar parameters I want to fit. The 1001 measurements are independent, so I want to obtain 1001 pairs of (a, b).
Generally, I would do
% load measured data
% ...
% size(all_xdata) = 1001*8;
% size(all_ydata) = 1001*8;
fun1D = @(x, xdata) x(1)*exp(-x(2)*xdata); % y = a*exp(-b*x)
x0 = [1, 1];                               % initial guess for [a, b]
abfit = zeros(1001, 2);
parfor k = 1:1001
    abfit(k, :) = lsqcurvefit(fun1D, x0, all_xdata(k, :), all_ydata(k, :));
end
I learned from here that a vectorized lsqcurvefit() may give me some speed-up.
However, I am a little confused about how to set the shape of xdata and ydata with vectorized lsqcurvefit(). The key problem is that I do not know how lsqcurvefit() calculates the sum of squares. From the documentation,
fun_ND = @... % vectorized version
sum-of-squares = sum((fun_ND(x,xdata)-ydata).^2)
If the size of ydata is 1001-by-8, what would the size of sum-of-squares be? 1001-by-1 (summing along each row) or 1-by-8 (summing along each column)? Apparently, for curve fitting, we want it to be 1001-by-1, i.e., the sum of squares should be calculated within each measurement.
For a matrix, sum() sums along columns if no second input is provided. I am not sure whether sum((fun(x,xdata)-ydata).^2) in the documentation is pseudocode or not.
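For reference, a minimal illustration of sum()'s behavior on a matrix:
M = [1 2; 3 4];
sum(M)          % along columns (default): [4 6]
sum(M, 2)       % along rows: [3; 7]
sum(M, 'all')   % over all elements: 10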

1 Comment

Xingwang Yong on 14 Jan 2021
I figured it out myself: one can assume that sum() in lsqcurvefit() adds along rows, e.g.
sum-of-squares = sum((fun_ND(x,xdata)-ydata).^2)
if ydata is m-by-n, then sum-of-squares should be m-by-1.
Here is the code, although it cannot directly prove that sum() adds along rows.
clear
numMeasure = 101;
numPoints = 8;
a = rand(numMeasure, 1);
b = rand(numMeasure, 1);
all_xdata = ones(numMeasure, 1) * (1:numPoints);   % each row is one measurement
all_ydata = a*ones(1, numPoints).*exp(-b*ones(1, numPoints).*all_xdata);
% vectorized model: row k uses parameters x(k,1) = a(k), x(k,2) = b(k)
fun_ND = @(x, xdata) x(:,1)*ones(1, size(xdata, 2)).*exp(-x(:,2)*ones(1, size(xdata, 2)).*xdata);
x0 = [rand(numMeasure, 1), rand(numMeasure, 1)];
abfit = lsqcurvefit(fun_ND, x0, all_xdata, all_ydata);
% overlay fitted curves on the simulated data for a few measurements
numDisplay = 5;
yfit = fun_ND(abfit(1:numDisplay,:), all_xdata(1:numDisplay,:));
figure;
plot(all_xdata(1:numDisplay,:)', all_ydata(1:numDisplay,:)', 'ok', ...
    all_xdata(1:numDisplay,:)', yfit', '-');
If you instead assumed that sum() adds along columns and wrote another version of fun_ND() accordingly, it would also work, as sketched below.
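For instance, a column-wise variant might look like this (a minimal sketch, assuming the data matrices are transposed to numPoints-by-numMeasure and implicit expansion, R2016b or later, is available; fun_ND_col is a hypothetical name):
% x(:,1)' and x(:,2)' are 1-by-numMeasure rows; xdata is numPoints-by-numMeasure
fun_ND_col = @(x, xdata) x(:,1)'.*exp(-x(:,2)'.*xdata);
abfit = lsqcurvefit(fun_ND_col, x0, all_xdata', all_ydata');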


Accepted Answer

Matt J on 14 Jan 2021
Edited: Matt J on 14 Jan 2021
"if ydata is m-by-n, then sum-of-squares should be m-by-1."
No.
The objective function minimized by lsqcurvefit is always just,
sum-of-squares = sum((fun_ND(x,xdata)-ydata).^2 ,'all')
In other words, the shape of your data matrices is irrelevant to the calculation of the objective function. However, in your case, because each row of your matrices happens to have its own independent set of parameters (a,b), the minimization is separable, i.e., the optimum (a,b) for each measurement depends only on the sum-of-squares terms from its corresponding row. But this has nothing to do with lsqcurvefit; it is just a feature of the parametric structure of your specific problem. If you had organized your data and parametric model column-wise instead of row-wise, you would get the exact same result.
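To see this concretely, the scalar that lsqcurvefit minimizes can be written as (a minimal illustration, not part of the original answer):
R = fun_ND(x, xdata) - ydata;   % residual matrix, whatever its shape
f = sum(R.^2, 'all');           % the single scalar being minimized; reshaping R changes nothing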
Another important thing to understand with lsqcurvefit is that it does not work with your xdata, ydata, or fun_ND output in 1001x8 matrix form. It always does a preliminary reshape of all those matrices into 8008x1 column vectors (see also here). Similarly, when you supply your own sparse Jacobian calculation (which you must do to get the advantages of this technique) lsqcurvefit will expect to receive an 8008x2002 matrix.
As one consequence of this, I think it will be better for you to organize your xdata and ydata as 8x1001 matrices instead of 1001x8 (and change fun_ND accordingly). That way, your Jacobians will have a simple, sparse, block diagonal form, where each of the diagonal blocks is 8x2.
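To make that concrete, here is a minimal sketch of a column-wise model with an analytic sparse Jacobian (not from the original answer; it assumes x is stored 2-by-numMeasure, so that x(:) interleaves the parameters as [a1; b1; a2; b2; ...] and the Jacobian is block diagonal with 8-by-2 blocks; expModelND is a hypothetical name):
function [F, J] = expModelND(x, xdata)
    % x is 2-by-n (row 1 = a, row 2 = b); xdata is m-by-n (one column per measurement)
    a = x(1,:);
    b = x(2,:);
    F = a.*exp(-b.*xdata);                 % m-by-n model values (implicit expansion)
    if nargout > 1
        [m, n] = size(xdata);
        dFda = exp(-b.*xdata);             % dF/da, m-by-n
        dFdb = -a.*xdata.*exp(-b.*xdata);  % dF/db, m-by-n
        rows = repmat((1:m*n)', 2, 1);     % residuals stack column-major, block k in rows (k-1)*m+1..k*m
        cols = [repelem((1:2:2*n)', m); repelem((2:2:2*n)', m)];  % a_k -> col 2k-1, b_k -> col 2k
        J = sparse(rows, cols, [dFda(:); dFdb(:)], m*n, 2*n);
    end
end
It would be called with the Jacobian output enabled, e.g. (x0 here being 2-by-numMeasure):
opts = optimoptions('lsqcurvefit', 'SpecifyObjectiveGradient', true);
abfit = lsqcurvefit(@expModelND, x0, all_xdata', all_ydata', [], [], opts);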

5 Comments

Show 2 older comments
Xingwang Yong on 15 Jan 2021
Thanks, Matt, you are right. The difference is small enough.
Another question: do I have to supply my own sparse Jacobian calculation? You said above "you must do...". I looked through the post you recommended, but I did not find an exact sentence saying "I must supply...".
In that post, you mentioned there are two ways to accelerate: one is using a vectorized objective function, the other is supplying my own Jacobian calculation. Do these two have to be used simultaneously?
Sometimes the Jacobian is difficult to calculate, e.g., when my objective function is based on a simulation instead of a simple exponential. Furthermore, the Jacobian may be non-sparse.
Can I get some speed-up if I only use vectorization and do not supply my own Jacobian?
Matt J on 15 Jan 2021
If you don't supply your own Jacobian in the ND formulation, lsqcurvefit will run its own for-loop over all 2002 parameters in an effort to compute the Jacobian via finite differences. Thus, you will be incurring the same looping effort as in the 1D formulation, except that here, each function evaluation will be for 8008 measurements instead of just 8 (i.e., more costly).
If you really cannot compute your own Jacobian, you can try using the JacobPattern option to make lsqcurvefit's finite differencing aware of the block sparsity of your Jacobian, but I am not certain how much benefit this will give you over the 1D formulation.
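For reference, a JacobPattern for the block-diagonal structure described above could be built like this (a minimal sketch under the same 2-by-numMeasure parameter layout; not from the original answer):
m = 8; n = 1001;                         % points per measurement, number of measurements
Jpattern = kron(speye(n), ones(m, 2));   % m*n-by-2*n pattern: one 8-by-2 block per measurement
opts = optimoptions('lsqcurvefit', 'JacobPattern', Jpattern);
Passing opts as the seventh argument to lsqcurvefit then restricts the finite differencing to the nonzero blocks.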
Xingwang Yong
Xingwang Yong 15 Jan 2021
I'll try the JacobPattern option. Thank you very much, Matt!


More Answers (0)

