
lsqnonlin and Jacobian misunderstanding: what is the Jacobian definition?

Hello!
I use the lsqnonlin MATLAB function to fit a curve f to my experimental points (coordinates x_i and y_i). To keep it simple:
[optimum_result,resnorm,residual,exitflag,output,lambda,jacobian] = lsqnonlin( y_i - f(a,x_i) ), where a is my fit parameter.
I am wondering which definition the Jacobian returned by MATLAB follows:
- the square of the Jacobian returned by lsqnonlin is the second derivative of the squared residual (evaluated at the optimum, i.e. at the best fit parameter found). Here my residual is y_i - f(a,x_i). This is the definition found here: http://www.ligo-wa.caltech.edu/~ehirose/work/andri_matlab_tools/fitting/MatlabJacobianDef.pdf
OR
- the Jacobian returned by lsqnonlin is the derivative of the residual (evaluated at the optimum). This is what I understood from reading the MATLAB help.
If the answer is the derivative of the residual (evaluated at the optimum), then I am confused. At the optimum, the sum of the squared residuals must be a minimum, so its derivative should be (close to) zero, and hence the sum of my Jacobian entries should be (close to) zero. Is that right? In MATLAB it is not zero, which is why I am confused.
Thanks.

Accepted Answer

Alan Weiss on 5 Aug 2013
You have a slight misunderstanding of what a Jacobian is for a sum-of-squares problem. The definition is here in the documentation.
In detail, the objective function is
sum((F(x,xdata) - ydata).^2)
The Jacobian is
J(i,j) = partial(F(x,xdata)(i))/partial(x)(j)
There is no reason to think that J is near zero at a solution. The gradient of the objective function is something like
2J'*F
and if F is near zero then the gradient of the objective is near zero, but J is not necessarily small.
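Alan's point can be checked numerically. The sketch below uses Python's scipy.optimize.least_squares as a stand-in for lsqnonlin (its jac field is the residual Jacobian in the same sense); the model and data are made up for illustration:

```python
import numpy as np
from scipy.optimize import least_squares

# Made-up noise-free data for the model F(x, t) = x*sin(t), true x = 2
t = np.linspace(0.1, 3.0, 20)
y = 2.0 * np.sin(t)

def residual(x):
    # Residual vector F(x, t) - y, exactly what would be handed to lsqnonlin
    return x[0] * np.sin(t) - y

sol = least_squares(residual, [1.0])

J = sol.jac            # Jacobian of the residual vector, shape (20, 1)
r = sol.fun            # residual vector at the solution
grad = 2.0 * J.T @ r   # gradient of the objective sum(r**2), i.e. 2*J'*F

# grad is near zero at the solution, but J is not: its column is
# (approximately) sin(t), whose largest entry is close to 1
```

The gradient 2*J'*F vanishes at the minimum because F does; the Jacobian entries themselves stay at order one.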
Alan Weiss
MATLAB mathematical toolbox documentation

More Answers (2)

Joffray Guillory on 5 Aug 2013
I use the lsqnonlin function (not lsqcurvefit), so I have something like this (a very simple example with only one fit parameter):
xdata = [ ... ];                  % experimental x_i
ydata = [ ... ];                  % experimental y_i
g = @(x) x*sin(xdata) - ydata;    % residual vector
[x,fval,residual,exitflag,output,lambda_fit,jacobian_fit] = lsqnonlin(g,x0,[],[],options);
According to you, my objective function is (I agree):
sum(( x*sin(xdata) - ydata ).^2)
but my Jacobian is:
J(i) = partial( x*sin(xdata(i)) - ydata(i) )/partial(x)
and not:
J(i) = partial( x*sin(xdata(i)) )/partial(x)
because the input argument to my lsqnonlin call is x*sin(xdata) - ydata. In other words, the Jacobian returned by lsqnonlin is the derivative of the residual, evaluated at the solution. Is that right?
1 Comment
Alan Weiss on 5 Aug 2013
It is immaterial whether or not we subtract ydata. For your example,
J(i) = sin(xdata(i))
whether or not ydata is included. In this example, J is a vector with length(xdata) components.
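As a quick sanity check of this point (again in Python/SciPy as a stand-in for lsqnonlin, with made-up data): fitting with and without the constant ydata in the residual yields the same Jacobian column, sin(xdata):

```python
import numpy as np
from scipy.optimize import least_squares

# Made-up data for the model x*sin(xdata)
xdata = np.linspace(0.1, 3.0, 10)
ydata = 1.5 * np.sin(xdata)

# Residual with ydata subtracted, and the bare model without it
with_y = least_squares(lambda x: x[0] * np.sin(xdata) - ydata, [1.0])
without_y = least_squares(lambda x: x[0] * np.sin(xdata), [1.0])

# The constant ydata drops out under differentiation, so both Jacobian
# columns match sin(xdata)
same_with = np.allclose(with_y.jac[:, 0], np.sin(xdata), atol=1e-6)
same_without = np.allclose(without_y.jac[:, 0], np.sin(xdata), atol=1e-6)
```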
I hope this clarifies the computation.
Alan Weiss
MATLAB mathematical toolbox documentation



Joffray Guillory on 6 Aug 2013
Thanks for these answers. It is clear to me now.
Joffray
