File Exchange


LMFnlsq - Solution of nonlinear least squares

version (859 KB) by Miroslav Balda
Efficient and stable Levenberg-Marquardt-Fletcher method for solving nonlinear equations


Updated 25 Feb 2012


The function LMFnlsq.m serves for finding the optimal solution of an overdetermined system of nonlinear equations in the least-squares sense. The standard Levenberg-Marquardt algorithm was modified by Fletcher and coded in FORTRAN many years ago (see the Reference). This version of LMFnlsq is a complete MATLAB implementation, complemented by the ability to set iteration parameters as options. This part of the code has been strongly influenced by Duane Hanselman's function mmfsolve.m.

Calling the function is rather simple and takes one of the following forms:
LMFnlsq % for help output
Options = LMFnlsq('default');
Options = LMFnlsq(Name1,Value1,Name2,Value2,...);
x = LMFnlsq(Eqns,X0);
x = LMFnlsq(Eqns,X0,'Name',Value,...);
x = LMFnlsq(Eqns,X0,Options);
[x,ssq] = LMFnlsq(Eqns,...);
[x,ssq,cnt] = LMFnlsq(Eqns,...);
[x,ssq,cnt,nfJ] = LMFnlsq(Eqns,...);
[x,ssq,cnt,nfJ,XY] = LMFnlsq(Eqns,...);

In all cases, the applied variables have the following meaning:
% Eqns is a function name or a handle defining a set of equations,
% X0 is a vector of initial estimates of the solution,
% x is the least-squares solution,
% ssq is the sum of squares of equation residuals,
% cnt is the number of iterations,
% nfJ is the total number of calls of Eqns and of the function evaluating the Jacobian matrix,
% XY is a matrix of iteration results for a 2D problem [x(1), x(2)].
% Options is a list of Name-Value pairs, which may be set by the calls
Options = LMFnlsq; % for default values,
Options = LMFnlsq('Name',Value,...); % for users' chosen parameters,
Options = LMFnlsq(Options,'Name',Value,...); % for updating Options.
If no Options is defined, default values of options are used.

Field names 'Name' of the structure Options are:
% 'Display' for control of iteration results,
% 'MaxIter' for setting maximum number of iterations,
% 'ScaleD' for defining diagonal matrix of scales,
% 'FunTol' for tolerance of final function values,
% 'XTol' scalar or vector for tolerance of final solution increments,
% 'Trace' for control of iteration saving,
% 'Lambda' for setting the initial value of the parameter lambda,
% 'Jacobian' for a handle of the function that evaluates the Jacobian matrix.
If no handle is declared, an internal function for finite-difference approximation of the matrix is used.
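Such a fallback is, in essence, a forward-difference approximation of the Jacobian. A minimal sketch of the idea, written here in Python/NumPy purely as an illustration (LMFnlsq's actual internal routine may differ in details such as step selection):

```python
import numpy as np

def fd_jacobian(fun, x, eps=1e-7):
    """Forward-difference approximation of J[i, j] = d r_i / d x_j."""
    r0 = np.asarray(fun(x), dtype=float)
    J = np.empty((r0.size, x.size))
    for j in range(x.size):
        xj = np.array(x, dtype=float)
        # step scaled by the magnitude of x(j), with a floor of 1
        h = eps * max(1.0, abs(xj[j]))
        xj[j] += h
        J[:, j] = (np.asarray(fun(xj), dtype=float) - r0) / h
    return J
```

Each column costs one extra evaluation of the residual function, which is why supplying an analytical 'Jacobian' handle both speeds up and stabilizes the iterations.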

Example 1:
The general Rosenbrock's function has the form
f(x) = 100(x(2)-x(1)^2)^2 + (1-x(1))^2
Optimum solution gives f(x)=0 for x(1)=x(2)=1. Function f(x) can be expressed in the form
f(x) = f1(x)^2 + f2(x)^2, where f1(x) = 10(x(2)-x(1)^2), and f2(x) = 1-x(1).
Values of the functions f1(x) and f2(x) can be used as residuals. The parameter Eqns then has the form of either a named function:
% function r = rosen(x)
%% ROSEN Residuals of the Rosenbrock valley:
% r = [ 10*(x(2)-x(1)^2) % first part, f1(x)
% 1-x(1) % second part, f2(x)
% ];
or a handle of the anonymous function:
% rosen = @(x) [10*(x(2)-x(1)^2); 1-x(1)];
The calls are different:
[x,...] = LMFnlsq('rosen',...); % in case of named function, or
[x,...] = LMFnlsq(rosen,...); % in case of the function handle.
LMFnlsq finds the exact solution of this problem in 17 iterations.
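The same two-residual formulation can be cross-checked with any generic Levenberg-Marquardt driver. Here is a sketch using Python's scipy.optimize.least_squares as a stand-in for LMFnlsq (illustrative only, not the author's code; the classical starting point [-1.2, 1] is an assumption):

```python
import numpy as np
from scipy.optimize import least_squares

# Residuals f1, f2 of the Rosenbrock valley, as in `rosen` above
rosen = lambda x: np.array([10.0 * (x[1] - x[0] ** 2), 1.0 - x[0]])

# method='lm' selects the MINPACK Levenberg-Marquardt implementation
sol = least_squares(rosen, x0=[-1.2, 1.0], method='lm')
print(sol.x)  # converges to [1, 1]
```

Minimizing the sum of squared residuals f1^2 + f2^2 is exactly minimizing f(x), so the solver lands on x(1) = x(2) = 1.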

Example 2:
Regression of experimental data.
Let us have experimental data simulated by the decaying function
y = c(1) + c(2)*exp(c(3)*x) + w*randn(size(x));
for a column vector x. An initial guess of the unknown parameters is obtained from the values of y(x) at x=0 and x->Inf as
c1 = y(end);
c2 = y(1) - c1;
together with an estimate c3 of the decay rate.
The anonymous function is defined for predefined column vector x as
res = @(c) c(1)+c(2)*exp(c(3)*x) - y;
and the call, say
[c,ssq,cnt] = LMFnlsq(res,[c1;c2;c3],'Display',1);
Provided w=0 (without errors) and x=(0:.1:2)' with c=[1,2,-1], the initial estimates of the unknown coefficients are
c0 = [1.2707, 1.7293, -1.6226].
The call
[c,ssq,cnt] = LMFnlsq(res,[c1;c2;c3])
gives the exact solution c=[1, 2, -1] in 9 iterations.
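The whole example can be reproduced with a generic least-squares solver. The following Python sketch mirrors the residual definition above, with scipy.optimize.least_squares standing in for LMFnlsq (illustrative, not the author's code; c3 below is the slope estimate quoted in the initial-guess vector c0):

```python
import numpy as np
from scipy.optimize import least_squares

# Noise-free data (w = 0): y = c1 + c2*exp(c3*x) with c = [1, 2, -1]
x = np.arange(0.0, 2.05, 0.1)
y = 1.0 + 2.0 * np.exp(-x)

# Residuals in the same form as the anonymous function `res` above
res = lambda c: c[0] + c[1] * np.exp(c[2] * x) - y

# Initial estimates from y at x = 0 and x -> Inf, as described
c1 = y[-1]            # ~1.2707
c2 = y[0] - c1        # ~1.7293
c3 = -1.6226          # decay-rate estimate quoted in the description
sol = least_squares(res, x0=[c1, c2, c3], method='lm')
print(sol.x)  # recovers [1, 2, -1]
```

Because the simulated data are noise-free, the residuals vanish at the solution and the fit reproduces c = [1, 2, -1] essentially to machine precision.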

* Users of MATLAB versions earlier than 7, which lack anonymous functions, have to call LMFnlsq with named functions for evaluating residuals.
* See LMFnlsqtest for a short explanation and solved examples.
* If the values of the unknowns differ by orders of magnitude, variable scaling is recommended (see help).
* The script BoxBOD.m from the NIST test problems is included together with its solution.

Fletcher, R. (1971): A Modified Marquardt Subroutine for Nonlinear Least Squares. Report AERE-R 6799, Harwell.

Cite As

Miroslav Balda (2021). LMFnlsq - Solution of nonlinear least squares, MATLAB Central File Exchange. Retrieved .

Comments and Ratings (73)

This is a great contribution!!
Thank you very much for this material!

jun xu

Dear Prof. Balda
I'm sorry to disturb you. I am Xu Jun, from South China Normal University, China. Thank you very much for your LMFnlsq; it brings great convenience to my problem of solving complex equations. But recently I have encountered a lot of confusion in solving nonlinear complex equations: my ssq is quite big and cnt is only one. I look forward to receiving your help. Can you help modify my code?
clc;clear;close all;
syms w k;
for k=6e5:1e4:12e5



kz1=@(w) (epsilon1.*w.^2./c^2-k.^2).^(1./2);
kz2=@(w) (epsilon2(w).*w.^2./c^2-k.^2).^(1./2);
kz3=@(w) (epsilon3.*w.^2./c^2-k.^2).^(1./2);

y =@(w) 1+epsilon1.*kz3(w)./(epsilon3.*kz1(w))-1i.*tan(kz2(w).*d).*(epsilon2(w).*kz3(w)/(epsilon3.*kz2(w))+(epsilon1.*kz2(w)./(epsilon2(w).*kz1(w))));
m= @(w) complex(w(1),w(2));
% fx=@(w) [real(y(m(w))); imag(y(m(w)))];
fx=@(x) abs(y(m(x)));

[w,ssq,cnt] = LMFnlsq(fx,[w_p,gamma])
hold on;
Best wishes to you!

Miroslav Balda

@tri le
It is always good to read answers made by the author in past (see the answer dated 15 Jun 2015). If you expect any troubles, you may send me experimental data, and I could try to solve your task.

tri le

Dear Professor
i'm trying to use your Toolbox to find parameters B,l,m for this function y=B.t^l.z^m, y,t,z are data i got from experiments. Your toolbox work well but the results i got were like the initial guess. if i use [10^-14 -1 1] for initial guess i got the result were the same. could you please tell me why and how to fix it? Best regards

Yan Li

Dear all,
When I ran it, it displayed
'Download function(s) inp (Id=9033) fig (Id=9035) separator (Id=11725 )from File Exchange'
Where can I get these?
Thanks !!


ali reza

I found the solution
Thank you for sharing your toolbox and your experince :)

ali reza

Dear Professor
I have a nonlinear equation, but the coefficients are unknown, so I cannot define the residuals.
I want to estimate the coefficients from the dataset.
How should I use your Toolbox to solve this problem?


ali reza



Thomas Heitz

Works great for me, and very well documented.

Roshanak Heidari

Dear professor, I read the document carefully and found the answer to my previous question. My new question is: for my case, can we limit the possible answer with a constraint before the function solves the problem? I mean, add a condition during the solution so that x, y, z cannot exceed the room's dimensions.

Roshanak Heidari

Is it possible to impose the constraint of the room dimensions on the initial answer? We solved the localization problem in different ways, but we could not get the same answer with this program.

Thanks again

Roshanak Heidari

dear professor,

I am trying to solve a localization problem with this program. We have 4 receivers with known x, y, z and one mobile with unknown location. I can get an answer, but it is not accurate and depends on the initial step. Is there a way to determine the accuracy of the answer for this algorithm, something like a variance, for increasing the iterations and getting a more accurate answer? Thanks


Miroslav Balda

If you read the section Description carefully, you will find in the second paragraph that "X0 is a vector of initial estimates of solutions".


Prajan Pradhan

What should be the value of X0 for this function? Every time I change X0, the output is something different, nowhere near the actual solution.

Miroslav Balda

@Hing Jiang,
Your problem belongs to those that may lead to an unstable solution. The reason is rather simple: the order-of-magnitude differences in the sought unknowns cause the numerically obtained estimates of the Jacobian matrix to be very inaccurate. The situation may be resolved by a "normalization" of the initial guess of the unknowns. If your guess is the column vector x0, you do not use it directly in the call of LMFnlsq, but later in the function for evaluating residuals. The call of LMFnlsq could have the form, say,

[x,ssq,cnt] = LMFnlsq('resid',ones(size(x0)));

It is obvious that all the variables defined by ones(size(x0)) have the same magnitude for evaluating elements of the Jacobian matrix. However, the intentional "fault" should be repaired at the beginning of the function 'resid' by the statement x = x.*x0; the same product should also be applied just after the function LMFnlsq returns a solution. The reason for these operations is simple: while LMFnlsq calculates with normalized variables, the array product x.*x0 transfers the solution into the real scales.

Hong Jiang

Dear Prof. Balda,
Thank you so much for providing this package. I would like to use this package to optimize camera projection matrix, in which the values of unknowns are different in orders. I call the function in this form: [x,ssq,cnt] = LMFnlsq(res, x0, 'MaxIter', M, 'XTol', Tol), however, the results are unstable if run N (N>1) times. I read the 'note' provided in this package, and it says "It is recommended to modify slightly the call of LMFnlsq for improvement of reliability of results and stability of the iteration process, namely if range of values of x's be large". Yet I still do not figure out how to modify the call of LMFnlsq. Could you please help me? Any advice is appreciated. Thank you so much.

Zeng Zhen

Aslak Grinsted

Thank you for this. I use it for optimising camera parameters and it works great. I have also used it in another project, where it also delivered a massive performance boost.

Aslak Grinsted

Miroslav Balda

Sorry, I repaired a wrong copying. The right solutions are the three sets of x, ssq, and cnt (numeric values elided).

Miroslav Balda

Dear Amin
I thank you for your interest in my function LMFnlsq. The difference in solutions between LMFnlsq and fminsearch is caused by the default preset tolerances 'XTol' and 'FunTol'. By changing them, you would obtain just the same result. It is obvious from the first group of results from the script Amin.m with tightened tolerances:
% Amin.m
%%%%%%%%% 2014-01-03
clear all
close all
format compact

y =@(x) x^2+1;
c =@(x) complex(x(1),x(2));
fx=@(x) abs(y(c(x)));

[x,ssq,cnt] = LMFnlsq(fx,[0.1;1.5],'XTol',1e-10,'FunTol',1e-10)


% solution of 2 equations: Re c^2 + 1; Im c^2 = 0;
fx=@(x) [real(y(c(x))); imag(y(c(x)))];
[x,ssq,cnt] = LMFnlsq(fx,[0.1;1.5])


% solution of 2 equations: Re c^2 + 1; Im c^2 = 0;
c = @(x) complex(x(1),x(2))^2;
fx = @(x) [real(c(x))+1; imag(c(x))];
[x,ssq,cnt] = LMFnlsq(fx,[0.1;1.5])

The first solution corresponds to the example solved by fminsearch, however with smaller tolerances. The solution is the same.

If you wanted to solve the >>complex<< problem f(x) = x^2 + 1, then the anonymous functions are different; they are given in examples 2 and 3. It is clear that the almost exact solution is reached in only 4 steps (cnt).

All the best in the New Year!


Dear Prof. Balda
Happy New Year
I have a question about LMFnlsq:
Currently I am working on a problem where I should numerically solve a nonlinear equation whose results will be complex numbers. I googled it and found two solutions:
one with your code and the other with fminsearch.
I checked both solutions on the simple problem:

clear all
close all

y =@(x) x^2+1;
c =@(x) complex(x(1),x(2));
fx=@(x) abs(y(c(x)));

[x,ssq,cnt] = LMFnlsq(fx,[0.1;1.5])

The results of the above call are x, ssq, cnt (and, from fminsearch, sol and fval); the numeric values are elided.
I wonder whether it is possible to use your code in this case, and whether any special option should be chosen for that?
Thank you very much
Best Regards

Jing Qian


Dear Prof. Balda,
I apologise for jumping to conclusions. You are correct and I have a test here that shows how LMFnlsq gives the same results as nlinfit and lsqnonlin.

hu1 = @(b,x) (b(1)*x(:,2) - x(:,3)/b(5))./(1 + b(2)*x(:,1) + b(3)*x(:,2) + b(4)*x(:,3));
x1=[470 285 470 470 470 100 100 470 100 100 100 285 285]';
x2=[300 80 300 80 80 190 80 190 300 300 80 300 190]';
x3=[10 10 120 120 10 10 65 65 54 120 120 10 120]';
y=[8.55 3.79 4.82 0.02 2.75 14.39 2.54 4.35 13.00 8.50 0.05 11.32 3.13]';
x=[x1 x2 x3];
beta=[1 0.05 0.02 0.1 2];
% 1.2526 0.0628 0.0400 0.1124 1.1914

fun = @(b) (b(1)*x2 - x3/b(5))./(1 + b(2)*x1 + b(3)*x2 + b(4)*x3) - y;
[C,ssq,cnt] = LMFnlsq(fun, beta, 'Display',[-10,0], 'FunTol',1e-10);
% 1.2526 0.0628 0.0400 0.1124 1.1914

lsqnonlin(fun, beta)
% 1.2526 0.0628 0.0400 0.1124 1.1914

Miroslav Balda

There are no bugs in LMFnlsq or LMFnlsq2 as you have announced. There is an error in the data, because all vectors x1, x2, x3, and y should be COLUMN vectors. Moreover, the division operator in fun should be the array operator './'. The repaired code follows:

% Repaired data: Column vectors!
x1 = [470, 285, 470, 470, 470, 100, 100, 470, 100, 100, 100, 285, 285]';
x2 = [300, 80, 300, 80, 80, 190, 80, 190, 300, 300, 80, 300, 190]';
x3 = [ 10, 10, 120, 120, 10, 10, 65, 65, 54, 120, 120, 10, 120]';
y = [8.55, 3.79, 4.82, 0.02, 2.75, 14.39, 2.54, 4.35, 13.00, 8.50, 0.05, 11.32, 3.13]';
b0 = rand(5,1);
fun = @(b) (b(1)*x2 - x3/b(5))./(1 + b(2)*x1 + b(3)*x2 + b(4)*x3) - y; % ./
[C,ssq,cnt] = LMFnlsq2(fun, b0, 'Display',[-10,0], 'FunTol',1e-10);


Dear Prof. Balda,
There is a bug on both LMFnlsq and LMFnlsq2, as shown in the code:

x1 = [470, 285, 470, 470, 470, 100, 100, 470, 100, 100, 100, 285, 285];
x2 = [300, 80, 300, 80, 80, 190, 80, 190, 300, 300, 80, 300, 190];
x3 = [ 10, 10, 120, 120, 10, 10, 65, 65, 54, 120, 120, 10, 120];
y = [8.55, 3.79, 4.82, 0.02, 2.75, 14.39, 2.54, 4.35, 13.00, 8.50, 0.05, 11.32, 3.13];
fun = @(b) ( b(1)*x2 - x3/b(5) )/(1 + b(2)*x1 + b(3)*x2 + b(4)*x3) - y;
[C,ssq,cnt] = LMFnlsq(fun, ones(5,1))

Error using *
Inner matrix dimensions must agree.

Error in LMFnlsq>getAv (line 434)
v = J'*r;

Error in LMFnlsq (line 279)
[A,v] = getAv(FUN,JAC,x,r,epsx);

Emmanuel Farhi

That's a good contribution for LM fitting. I use it without constraints/restraints.

Miroslav Balda

Yes, your residuals are in order, except for the brackets. However, penalty functions are applied when we want to constrain a solution. Your proposal does not influence the solution at all, because all iterations remain within the square of side 4. You may try what happens if you introduce the bounds +-0.5 or +-0.25. Or look at the example from the LMFnlsqtest file, where the solution is constrained to a more complex (circular) area.

Weights are active only for the current iteration outside the bounds. This means it is not necessary to search for "optimal" weights, because no optimal weight exists; the solution should be insensitive to the weight value. The residuals corresponding to penalty functions only have to be bigger than those of the other equations, to keep the iterations within the bounds, where the penalties are zero. It is easy to guess that w = 10 up to 1000 could be appropriate. The final solution should not depend on the exact value of the weight.


Dear Prof. Balda,
Thank you so much, I understand it better now!
For the bounded Rosenbrock problem:
-2 <= x[1] <= 2
-2 <= x[2] <= 2
We need 4 penalty functions:
and we try several values of w2, for example 0.1, 0.2, 0.4, ..., 1.9, to get the "best" value for w2.
Is this correct?

Miroslav Balda

@Jose 3.4.2013
No, it cannot operate, for three reasons:
1. Formally, the chosen interval does not reduce the domain of feasible solutions, which lies within the interval. However, the more important reasons are:
2. Your condition
(x(2)>2)*(x(2)<-2)*x(2)*w2 is a combination of two conditions; thus you have to set up two residuals.
Both residuals are zero when their condition is false, but depend linearly on the distance from the limit value through the second term (the one before the weight w2).
3. Your proposed condition is identically zero, because the product of the two logical terms equals zero for any x.

The weight w2 should be chosen carefully, so that it gives a solution just on the limit when the minimum lies outside the interval. It should be neither too small nor too large.


Dear Prof. Balda,
Thank you for your reply.
So, for the Rosenbrock problem with bounds:
-2 <= x <= 2
The penalty function is:
(x(2) > 2)*(x(2) < -2)*x(2)*w2
What is the best way to find experimentally w2?

Miroslav Balda

I am preparing the function npeaks.m, which decomposes a function into a sum of Gaussian functions with the use of the LMFnlsq function. It will appear shortly.

Bounds can be introduced by additional residuals in the form of penalty functions.
The residual (x(4)<0)*x(4)*w4
will keep x(4) nonnegative, provided the user-defined weight w4 is a suitable positive real value. It should be found experimentally.
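The penalty-residual idea can be sketched with a toy problem. In this illustration the base residuals and the weight w are invented for demonstration, and Python's scipy.optimize.least_squares plays the role of the LM driver (it is not the author's code):

```python
import numpy as np
from scipy.optimize import least_squares

w = 100.0  # user-chosen penalty weight, to be found experimentally

# Base residuals whose unconstrained minimum lies at x = [-1, 2]
base = lambda x: np.array([x[0] + 1.0, x[1] - 2.0])

# Extra penalty residual (x[0] < 0) * x[0] * w: zero while x[0] >= 0,
# growing linearly once an iterate crosses below zero
res = lambda x: np.append(base(x), (x[0] < 0.0) * x[0] * w)

sol = least_squares(res, x0=[1.0, 1.0], method='lm')
print(sol.x)
```

With the penalty active, the constrained component is pulled from its unconstrained optimum -1 to (essentially) the bound at 0, while the unconstrained component still reaches its optimum 2; a larger w pushes the solution correspondingly closer to the bound.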


Dear Prof. Balda,
Excellent results, thank you for sharing!

How can we add parameter bounds?


My data looks a bit like a Gaussian (normal) curve. I fitted it using 1, 2, 3, and 4 Gaussians added together, where the sigmas, centre positions, and heights were variable. It worked well for 2 and 4 Gaussians. I noticed that for a 'res' function with an odd number of Gaussians added together, the height of the final fit (based on the x values output after cnt iterations) doesn't match the data. I.e., if you make a simple Gaussian function as the data you want to fit, and your guess function is a Gaussian with the same parameters you created the data with, the final variables will not be the same: the height will be off by a noticeable amount. This only happened with odd numbers; for even numbers of Gaussians added together in the guess fit, it worked perfectly.


@Prof. Miroslav Balda
Here is my question on MATLAB Answers; please check it there.


@Prof. Miroslav Balda
Actually my data consists of many wavelengths, but here I give you data from two wavelengths.
for 690nm
F = [0.779543143 0.624771688 0.387776261 0.266986051 0.174732672 0.130313689 0.110540421 0.051412959]

a is in range = 0.09 - 0.8
b is in range = 7 - 15

for c = [0.8 0.9 1 1.1 1.2 1.3 1.4 1.6]

I always get an imaginary solution with your code; please help me.


I figured it out! It took a while: I just split the function in half and kept the rest of the form the same, res = @(y) eqn(1)*(x<b) + eqn(2)*(b<=x). It took a few tries to get the syntax right. Thanks for the amazing algorithm!!!

Miroslav Balda

Your problem is solvable after introducing new variable d=a+b and constraints on the variables to be real. If you send me your data, I'll try to solve it.


I'm trying to contact you by email because I need to solve my case, so I ended up here.

I have a function with 2 parameters, like this:
F = (0.6/(a + b))*(sqrt(3*a*(a + b))+(1/c))*(exp(-sqrt(3*a*(a + b))*c))/(c^2)

F is known reflectance data,
a and b are the parameters,
c is a known vector of distances.

Can I use LMFnlsq to solve this with the Levenberg-Marquardt algorithm? I am trying to use the code, but the solution gives me imaginary a and b.


I'm fitting with a discontinuous function. My data resembles a normal curve but is wider on one side, so I hope to fit it with two halves of a normal curve with different sigmas (spreads) on either side of the centre point. How do I input this into the res function? The fitted parameters are the height, the two sigmas, and the centre point. Furthermore, given that the centre is what I'm trying to make more accurate, is it a big problem that it also determines which side of the function (which sigma) I'm on?

Miroslav Balda

The algorithm LMFnlsq is able to find the best approximation of a set of points by a curve supplied by the user. The curve is described by a formula with a number of (at the beginning) unknown parameters. The function LMFnlsq modifies the values of the parameters in such a way that the sum of squares of differences between the measured points and the values of the function at the corresponding positions becomes minimum. In your case, you may choose all pixels describing a rib as the measured set of points; should the form of a suitable function be known, the function LMFnlsq might be used for finding its optimum parameters.

Munie Ahmad

Dear Prof Balda,

I have a case where I need to fit multiple curves in one image. The image is a rough trace of ribs, in BW, from a chest x-ray. The ribs are processed one side at a time (left and right). I want to use multiple-curve fitting to get the exact curve of each rib for the left/right side. I would like to know if this algorithm is able to solve my problem. Please advise, and many thanks in advance.


Charles Nelatury

L Pir

Richard Crozier

very useful, thanks!


I thank you so much for providing this code. Implementation has its own challenges but your code worked like a charm to show how beautifully theory and practice meet. With good initial estimates, the algorithm converged in only a few iterations.

FYI: I used it for T1 estimation in MR where the equations in our case were much more complicated than single exponential recovery equations.



Man thank you so much, it was amazing!

David Zhang


I was using an analytical Jacobian. I also will not accept your view that a failed lsqcurvefit does not deserve 1 star. The NIST solutions are tested using different methods (hence a standard deviation is also available), so failing even one of them means there are codes out there that perform better, even from the poor starting condition.

Furthermore, 3 of the 5 failure cases are observed data. They are not some cooked-up example designed to cause the optimization to fail. Therefore, if used on real-world examples, there is a possibility that it will not work. I am simply letting everybody know, as a caveat, that the program's solution should be treated with a grain of salt, even if the sum of squares dropped by 8 orders. Although, as the website says, passing all tests is not a prerequisite for good optimization software, don't you think that not passing the tests from an agency that establishes industry standards means it cannot be considered an "industry standard"?

I admit giving it 1 star was too aggressive; I did it to get your attention and a response. Indeed it worked. Unfortunately I can't go back and change it. Because this program works for all easy to intermediate difficulty problems, it deserves 4 stars.

Miroslav Balda

@ David:
I thank you for the thorough testing of my function LMFnlsq. You have found that 5 functions of the highest difficulty, out of 27 in total, failed. That is true only partially: they failed only for the worst initial estimates of the solution. A good solution was obtained with the closer estimate for all the above examples except Bennett5. In that case only one digit of the sum of squares was valid, yet the sum of squares dropped by 8 orders! All those results were obtained with the embedded function for finite-difference Jacobian matrix estimates, which strongly diminishes the precision of the results. Only an analytical Jacobian matrix ensures top results. I am afraid that you also used finite-difference approximations.

The NIST collection of testing problems is only a recommendation for software authors. Contrary to your opinion, I am satisfied with the results I have obtained, especially after testing the less difficult problem Hahn1, which converged for both estimates of the solution, while the function lsqcurvefit from the Optimization Toolbox (OT) did not find any solution. Would you evaluate it, too, with one star? Not me, because OT is much better. However, my function is smaller and takes fewer iterations to solve the problem. This is the reason why I regard your evaluation as a bad joke.
One star means: "withdraw your function from the FEX!". I will not obey your evaluation, because there are hundreds and maybe thousands of people who have downloaded and successfully applied my function for solving their problems. Some of them gave the function the top evaluation, even though I never declared it an "industry standard".

David Zhang

By the way, the failures occur in


David Zhang

This doesn't work for all the least-squares tests established by the National Institute of Standards and Technology (NIST).

So this can't be called the industry standard.

Ryan Muir

Works fast and well! However, it has a HUGE learning curve if you don't write code at a very high level... I found the included examples to be less than completely understandable, and they don't work unless you already know how to use them. Instead, here is a VERY SIMPLE program that uses this algorithm to fit a function to a simulated dataset. I took it and modified it from a forum post elsewhere on the internet. Recall that chi^2 = sum(((x - mu)/sigma).^2); the point of this function is to determine which parameters in mu minimize chi^2:

% Ender 2008-07-08
a = 10; b = -8; c = -.2; %Model parameters
t = (0:.01:1)'; %should be a column vector

data = a + b*(1-exp(t/c)) + .1*randn(size(t)); %Function plus random error; simulates data

res = @(x) data - (x(1) + x(2)*(1-exp(t/x(3))) ); %Numerator of chi^2, with quantity squared implied for fitting in LMFnlsq; also known as residuals

x0 = [5,3,-.1]; %rather poor initial guess of parameters

[x,ssq,cnt] = LMFnlsq(res,x0) %Returns the parameter values which minimized the value of the numerator

%Plot the data
hold on
plot(t,(x(1) + x(2)*(1-exp(t/x(3)))),'r'), grid

Ryan Muir

This program is excellent. I found that it was able to do my nonlinear regression very quickly, with multiple quick and easy options for both analysis and displaying the data correctly. Further, once you figure out how to use the function, performing a nonlinear regression was quite easy.

However, initially figuring out how to use the software was quite hard =P. I first attempted to follow the instructions posted here, which are not complete or correct in their explanation or equations. Once I discovered the included test file, which is correct and will run, I was able to copy and paste the relevant section of code (example 3) into MATLAB and get instant results. I was then able to pick apart and adapt the example for my own purposes. Figuring all that out took quite a while. I took off one star for that upset, but it's really an excellent software package!

Xylar Asay-Davis

This function was extremely helpful for the problem I'm working on: least-squares fit to multidimensional rational polynomials.

Thanks very much!

Neil Dalchau

This is an excellent improvement to LMFsolve, which I only became aware of today. This upgrade has considerably improved some of my existing analyses. Convergence is now 100% for my problem.

Many thanks Miroslav!

Miroslav Balda

Read my answer dated 04 jun 2009. I'll answer you later by e-mail.

Shin-ichi Uehara


I want to do curve fitting of some laboratory results with a model.
The model which I want to use can not explained by a single function,
but is explained by a series of differential equations.
In this case, I do not know how to apply LMFnlsq.
It is very helpful if you give any advice.


(I am applying "fminsearch" to this problem so far, but
the description of "fminsearch" mentions that this code is
not suitable to this kind of problems.)

Jon Doan

Very nice


Miroslav Balda

This part of the submission is intended for Comments and Ratings. Moreover, I asked everybody to write to me by e-mail about errors and improvements; see the section Other Requirements. This is the reason why I'll not communicate here anymore.



How can I give an interval for the results? I don't want negative values for some answers, or I need some answers to be higher than certain values.

You also helped me before, Mira



I tried to run the example with Rosenbrock's function (SQP) with "XY" among the output arguments, but it does not work and shows XY = [] (empty matrix). Can anybody clear this up for me?

function ros = rosen(x)
d = sqrt(x(1)^2+x(2)^2)-1.5;
ros = [ 10*(x(2)-x(1)^2) % first part, f1(x)
1-x(1) % second part, f2(x)
(d>0)*d*1000 ];

>> options=LMFnlsq;
>> [x,ssq,cnt,nfJ,XY] = LMFnlsq('rosen',x0,options)


Professor Balda:
I truly appreciate your helping me out to solve my nonlinear equation.
Your code worked beautifully for me, and thanks again for your prompt feedback over the weekend.
Thanks again, Dr. Balda.


Steven White

This is a great bit of code that does exactly what it says. I used it to replace lsqcurvefit as I no longer have the optimisation toolbox.

I would suggest that the author consider adding further examples, as the ones given are very complicated to understand if you come from a very different field. However, after a little searching I found the following by the author, which I found much more useful:

It tells you how to use the function to fit a curve to some data. This should be easily modified for fitting other functions/data etc.

C Schwalm

Levente Hunyadi

Jon Jackson

Works well, thanks.

For others who may be looking for the LMFtest dependencies, they are file ID numbers 9033, 9035 and 11725 (as documented in the pdf file appendix).

Miroslav Balda

I tested the function LMFnlsq once again after reading your comment and evaluation. I am sorry that you are not able to copy and paste ASCII lines without errors. You did not try the script LMFtest, which is included in the zip file together with LMFnlsq, and from which you could get results without any copying. Was it so difficult to look at it and find out how to use the function LMFnlsq? I am afraid of the reaction of other people, who could use an even better function than the one they may find in the Optimization Toolbox, but who, seeing your evaluation, will pass it by. It is not fair. I think that you have evaluated yourself.

Ender Wiggin

The solution does not work. I get an error whenever I copy and paste the code and then run it in MATLAB.

MATLAB Release Compatibility
Created with R2006b
Compatible with any release
Platform Compatibility
Windows macOS Linux
