lsqcurvefit not adjusting some parameters as expected

4 views (last 30 days)
Leor Greenberger on 26 Jul 2019
Answered: Alex Sha on 22 Feb 2020
I am trying to fit some data to a three-parameter curve of the form y = |a*(x - c) + b*(x - c)^3|.
What I am finding when using lsqcurvefit is that it converges on what appear to be good values for a and b, but not for c. In fact, c remains unchanged from the guessed value, regardless of what that initial value is.
First I try with c = p(3) = 1000:
x = 0:2047;
S = load('y.mat'); % see attachment; load returns a struct
y = S.y;           % extract the data vector (field name assumed to be y)
fun = @(p,x)abs(p(1)*(x-p(3))+p(2)*(x-p(3)).^3);
options = optimoptions(@lsqcurvefit,'StepTolerance',1e-10, 'Display', 'iter-detailed', 'FunctionTolerance', 1E-12);
pGuess = [2.5E-5 2.6E-12 1000];
[p,fminres] = lsqcurvefit(fun,pGuess,x,y, [], [], options)
                                        Norm of      First-order
 Iteration  Func-count     f(x)          step         optimality
     0          4       0.00635297                     1.03e+08
     1          8       0.00632173   3.02663e-13           99.9
     2         12       0.00623569   9.85058e-05       1.13e+04
     3         16       0.00623569   3.31307e-17         0.0114
Optimization stopped because the relative sum of squares (r) is changing
by less than options.FunctionTolerance = 1.000000e-12.
p =
2.58614068198555e-05 1.75916049599052e-12 1000.00009850203
fminres =
0.00623568562340633
Now I try with c = p(3) = 1400:
pGuess = [2.5E-5 2.6E-12 1400];
[p,fminres] = lsqcurvefit(fun,pGuess,x,y, [], [], options)
                                        Norm of      First-order
 Iteration  Func-count     f(x)          step         optimality
     0          4         0.168905                     9.79e+09
     1          8          0.10571   6.45539e-12       1.16e+06
     2         12         0.105681   9.10932e-06       1.43e+08
     3         16         0.105681   0.000231759       1.43e+08
     4         20         0.105681   5.79397e-05       1.43e+08
     5         24         0.105681   1.44849e-05       1.43e+08
     6         28         0.105681   3.62123e-06       1.43e+08
     7         32         0.105681   9.05308e-07       1.43e+08
     8         36         0.105681   2.26327e-07       1.43e+08
     9         40         0.105681   5.65817e-08       1.43e+08
    10         44         0.105681   1.41454e-08       1.43e+08
    11         48         0.105681   3.53636e-09       1.43e+08
    12         52         0.105681    8.8409e-10       1.43e+08
    13         56         0.105681   2.21022e-10       1.43e+08
    14         60         0.105681   5.52556e-11       1.43e+08
Optimization stopped because the norm of the current step, 5.525560e-11,
is less than options.StepTolerance = 1.000000e-10.
p =
2.4936115641411e-05 -3.9025705054523e-12 1399.9999908909
fminres =
0.105680830464589
From the data set it is clear that the minimum occurs at x = 1066:
>> [m,k] = min(y)
m =
0.000292016392169768
k =
1067
>> x(k)
ans =
1066
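A side note on scale: the three parameters span roughly 15 orders of magnitude (~1e-5, ~1e-12, ~1e3), which is hard on the solver's default internal scaling. A minimal sketch of declaring those magnitudes through lsqcurvefit's TypicalX option (a hedged suggestion, not a verified fix for this data set):
% Sketch: tell the solver the expected magnitude of each parameter
options = optimoptions(@lsqcurvefit,'StepTolerance',1e-10, ...
    'Display','iter-detailed', 'FunctionTolerance',1E-12, ...
    'TypicalX',[2.5E-5 2.6E-12 1000]);   % magnitudes taken from pGuess
[p,fminres] = lsqcurvefit(fun,pGuess,x,y, [], [], options)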
3 Comments
Leor Greenberger on 26 Jul 2019
I knew I forgot something! It is attached now. Thanks for looking into this!
Walter Roberson on 27 Jul 2019
Your function would benefit from a constraint.
fun = @(p,x)abs(p(1)*(x-p(3))+p(2)*(x-p(3)).^3);
If you feed in -p(1) and -p(2), the result will have the same abs() as with the original p(1) and p(2). Therefore you can constrain one of the values, such as p(1), to be >= 0, which will reduce searching.
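A minimal sketch of applying that constraint through lsqcurvefit's bound arguments (same fun, pGuess, x, y, and options as in the question):
% Sketch: restrict p(1) >= 0 via a lower bound; p(2) and p(3) stay free
lb = [0, -Inf, -Inf];
ub = [];   % no upper bounds
[p,fminres] = lsqcurvefit(fun,pGuess,x,y, lb, ub, options)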


Accepted Answer

Matt J on 27 Jul 2019
It helps to pre-normalize your x,y data and to use polyfit to generate a smart initial guess:
% data pre-normalization
y = y.'/max(y);             % scale y to a max of 1 (transposed to a row)
[~,imin] = min(y);
x = (x - x(imin))/max(x);   % center x on the minimum, then scale

% generate initial guess from a cubic fit to the right-hand branch
p0 = polyfit(x(imin:end),y(imin:end),3);  % descending powers: [cubic quad linear const]
pGuess = [p0(3), p0(1), 0]; % p(1) = linear coeff, p(2) = cubic coeff, c ~ 0 after centering

fun = @(p,x)abs(p(1)*(x-p(3))+p(2)*(x-p(3)).^3);
options = optimoptions(@lsqcurvefit,'StepTolerance',1e-10, 'Display', 'iter-detailed', 'FunctionTolerance', 1E-12);
[p,fminres,~,ef] = lsqcurvefit(fun,pGuess,x,y, [], [], options)
plot(x,y,'o-',x(1:20:end),fun(p,x(1:20:end)),'x--y'); shg
[Plot: normalized data (o-) with the fitted curve (x--) overlaid]
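Because the fit runs on normalized data, the fitted parameters are in normalized units. A short sketch of mapping them back, assuming the original scales were saved before normalizing (the names x0, xm, ym are illustrative, not from the answer):
% Sketch: undo the normalization to recover parameters in original units
% x0 = x(imin), xm = max(x), ym = max(y), all taken from the RAW data
c = x0 + p(3)*xm;     % kink/inflection location in original x
a = p(1)*ym/xm;       % linear coefficient, original units
b = p(2)*ym/xm^3;     % cubic coefficient, original units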
3 Comments
Leor Greenberger on 27 Jul 2019 (edited)
Thank you very much for helping with this! The normalization you did is interesting. Ultimately, what I need to do is create an algorithm that finds the inflection point from the fewest possible data samples. That is, I have a digital system with an 11-bit register, and what you see here is my characterization of it from code 0 to 2047: I set the register and measured the output of the system. I wonder if this will work if I have enough samples and guess min(y).
Matt J on 27 Jul 2019
You're welcome, but please Accept-click the answer if you are satisfied that the fitting code is now working.


More Answers (1)

Alex Sha on 22 Feb 2020
The best solution seems to be:
Root of Mean Square Error (RMSE): 8.58712889537096E-5
Sum of Squared Residual: 1.50943288116718E-5
Correlation Coef. (R): 0.999947601349384
R-Square: 0.999895205444387
Adjusted R-Square: 0.999895102905683
Determination Coef. (DC): 0.99989229572485
Chi-Square: 0.00435400203642259
F-Statistic: 9515701.1004098
Parameter    Best Estimate
---------    ---------------------
p1           -2.54821313582336E-5
p2           -2.52958254532916E-12
p3            1062.58024839597
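For what it's worth, these values can be sanity-checked by plugging them straight into the original (un-normalized) model from the question; a quick sketch, assuming fun, x, and y as defined there:
% Sketch: verify the reported parameters against the raw data
pBest = [-2.54821313582336E-5, -2.52958254532916E-12, 1062.58024839597];
res   = fun(pBest, x(:).') - y(:).';   % force matching row orientation
rmse  = sqrt(mean(res.^2))             % should come out near 8.587e-5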
