
Improving lsqcurvefit solutions by improving x0 through loops

4 views (last 30 days)
Sam Mahdi on 12 Mar 2019
Edited: Sam Mahdi on 13 Mar 2019
Hello everyone,
I have a quick question about improving my lsqcurvefit results by refining x0 across multiple datasets.
Data1=[0.596421471 0.06
0.5859375 0.119284294
0.566037736 0.29296875
0.530035336 0.622641509
0.418994413 1.219081272
0.388619855 3.058659218
];
Data2=[5.00E+04 3.82E+04 3.45E+04 3.42E+04 3.74E+04 3.21E+04 2.81E+04 2.88E+04
5.00E+04 3.82E+04 3.45E+04 3.42E+04 3.74E+04 3.21E+04 2.81E+04 2.88E+04
3.08E+04 3.07E+04 2.19E+04 2.23E+04 2.53E+04 2.05E+04 1.98E+04 1.89E+04
2.05E+04 1.87E+04 1.30E+04 1.10E+04 1.62E+04 1.31E+04 1.05E+04 1.05E+04
8963 1.11E+04 6243 3504 6454 4346 4448 4357
0.00E+00 0.00E+00 0.00E+00 0.00E+00 0.00E+00 0.00E+00 0.00E+00 0.00E+00
];
Pt=Data1(:,1);
Lt=Data1(:,2);
for n = 1:size( Data2, 2 )
Int=Data2(:,n);
...
plot(Lt,Int, 'ro');
xdata=[Pt,Lt]
F=@(x,xdata)((xdata(:,1)+xdata(:,2)+x(2))-((xdata(:,1)+xdata(:,2)+x(2)).^2-4.*xdata(:,1).*xdata(:,2).^1/2).*x(1)/2.*xdata(:,1))
x0=[1 1]
options=optimoptions('lsqcurvefit','MaxFunEvals',5e2)
lb=[];
ub=[];
[x,resnorm,~,exitflag,output]=lsqcurvefit(F,x0,xdata,Int,lb,ub,options)
options = optimoptions('lsqcurvefit','Algorithm','levenberg-marquardt');
lb = [];
ub = [];
[x2,resnorm2,~,exitflag2,output2]=lsqcurvefit(F,x0,xdata,Int,lb,ub,options)
M=mean(x)
M=mean(x2)
hold on
plot (Lt, F(x,xdata))
plot (Lt, F(x2,xdata))
hold off
end
In a nutshell, I have a function F that is trying to fit my Data2 using two different algorithms. The fitted curve currently matches my original plot poorly. I've seen that the initial starting values (x0) can have a significant impact on the results. Is there a way to have later iterations of my loop change the x0 value, sampling various x0 values to get closer to the value that actually fits my data well (this could be judged with an RMSD or something like that against my initial plot and values)? The simplest way I was thinking of doing this is to have x0 change to the results of the previous run, i.e., the first run uses the x0 I gave it, and the second run uses the results of the first run as its own x0 (see the sketch after this paragraph).
I don't quite know how to do this, whether it is a modification of x0 (like my example) or a loop inside my current loop that runs the same dataset (n = 1) repeatedly through multiple iterations, using the RMSD idea above to try to improve it, and then does the same thing for the second dataset (n = 2), although I can see that method taking quite a while and potentially creating an endless loop.
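For illustration, here is a minimal sketch of the warm-start idea I have in mind, assuming the F, xdata, and Data2 defined above (the names opts and xFit are just placeholders, and the algorithm choice is only an example):
% Hedged sketch: reuse each column's fitted parameters as the starting
% point for the next column (warm start). Assumes F and xdata as above.
x0 = [1 1];                                  % initial guess for the first column
opts = optimoptions('lsqcurvefit','Algorithm','levenberg-marquardt');
for n = 1:size(Data2, 2)
    Int = Data2(:, n);
    [xFit, resnorm] = lsqcurvefit(F, x0, xdata, Int, [], [], opts);
    fprintf('column %d: resnorm = %g\n', n, resnorm);
    x0 = xFit;                               % warm start for the next dataset
end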

Answers (2)

Matt J on 12 Mar 2019
For such a small problem, it is a simple matter to do a vectorized exhaustive search rather than looping:
% Grid of candidate values for x(1) and x(2)
[X1,X2]=meshgrid(linspace(-50000,0,500), linspace(-60,60,100));
[mm,nn]=size(X1);
XX1=reshape(X1,1,1,[]);   % candidates laid out along the 3rd dimension
XX2=reshape(X2,1,1,[]);
Y=reshape(vecnorm(F([XX1,XX2],xdata)-Int,inf,1),mm,nn);  % worst-case residual per candidate
[minval,idx]=min(Y(:)),
x0=[X1(idx),X2(idx)]      % best grid point becomes the starting guess
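From there, x0 can be passed to lsqcurvefit in the usual way, e.g. (this refinement step is just a sketch; F, xdata, and Int are as in your code):
% Hypothetical refinement step using the grid-search result as the start.
opts = optimoptions('lsqcurvefit','Algorithm','levenberg-marquardt');
[xRefined, resnormRefined] = lsqcurvefit(F, x0, xdata, Int, [], [], opts);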
17 Comments
Matt J on 13 Mar 2019
Note that x.^1/2 is not the square root of x. Compare:
>> 2.^1/2
ans =
1
>> sqrt(2)
ans =
1.4142
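The reason is operator precedence: .^ binds more tightly than /, so x.^1/2 is parsed as (x.^1)/2. Writing x.^(1/2), or simply sqrt(x), gives the intended square root:
>> 2.^(1/2)
ans =
1.4142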
Sam Mahdi on 13 Mar 2019
Edited: Sam Mahdi on 13 Mar 2019
Oh, well that makes a huge difference. With that modification, my graph is no longer in the negative, although now it looks even weirder! I should note that I had forgotten to normalize my data earlier; it won't change the plot, but it does make the values fit the quadratic function you have below better.
Data2=[0.914458052 0.698775361 0.630597697 0.625479803 0.684335588 0.587461159 0.512703345 0.526777554
0.914458052 0.698775361 0.630597697 0.625479803 0.684335588 0.587461159 0.512703345 0.526777554
0.535527691 0.534656914 0.38174852 0.388714734 0.440961338 0.356844305 0.34430512 0.328631139
0.401724476 0.36664707 0.253968254 0.215755438 0.316872428 0.257103665 0.206153243 0.204977464
0.182397232 0.224867725 0.127045177 0.071306471 0.131339031 0.088441188 0.090516891 0.088665039
0 0 0 0 0 0 0 0
];



Matt J on 13 Mar 2019
Edited: Matt J on 13 Mar 2019
I can see that your model function derives in some way from the quadratic formula, but I can't quite see where all the pieces are supposed to go. In any case, using the implementation below, I get a fit that approaches sensible. Maybe you can tweak it to perfection.
for n = 1:size( Data2, 2 )
Int=Data2(:,n);
xdata=[Pt,Lt]
%F=@(x,xdata)((xdata(:,1)+xdata(:,2)+x(1,2,:))-((xdata(:,1)+xdata(:,2)+x(1,2,:)).^2-4.*xdata(:,1).*xdata(:,2).^1/2).*x(1,1,:)/2.*xdata(:,1))
F=@modelfun;
% Grid search for a good starting point (same idea as the earlier answer, over positive ranges)
[X1,X2]=meshgrid(linspace(0,1e4,500), linspace(0,60,100));
[mm,nn]=size(X1);
xx1=reshape(X1,1,1,[]);
xx2=reshape(X2,1,1,[]);
Y=reshape(vecnorm(F([xx1,xx2],xdata)-Int,inf,1),mm,nn);
[minval,idx]=min(Y(:)),
x0=[X1(idx),X2(idx)];
options = optimoptions('lsqcurvefit','Algorithm','levenberg-marquardt');
[x2,resnorm2,~,exitflag2,output2]=lsqcurvefit(F,x0,xdata,Int,[],[],options)
plot (Lt,Int,'ro',Lt, F(x2,xdata))
xlabel 'Lt'
end
function out=modelfun(x,xdata)
% Quadratic-formula model: returns the "+" root of A*out.^2 + B*out + C = 0
Pt=xdata(:,1);
Lt=xdata(:,2);
x1=x(1,1,:);
x2=x(1,2,:);
A=Lt.*x1;
B=-(Pt+Lt+x2);
C=Pt;
out = ( -B + sqrt(B.^2-4.*A.*C) )./(2.*A);
end
1 Comment
Sam Mahdi on 13 Mar 2019
It is the quadratic equation. The only addition is that the quadratic is being multiplied by a variable (x1). You have the right setup in your script in terms of what everything stands for, except that x1 belongs in the numerator, and you currently have it in the denominator. This is why I'm confused about why my graphs look funky: if you plot it the way you have it written, you get a nice curve, while my function is essentially the same thing but produces the funky results above.
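To illustrate what I mean, here is a rough sketch of your modelfun with x1 moved out of A and into the numerator (the name modelfun2 is just for illustration, and whether A should end up as Lt or Pt is exactly the part I'm not sure about):
function out = modelfun2(x, xdata)
% Rough sketch: same quadratic-formula setup as modelfun above, but with
% the fitted scale factor x1 multiplying the root (numerator) rather than
% appearing inside A (denominator). Whether A should be Lt or Pt here is
% an open question in this thread.
Pt = xdata(:,1);
Lt = xdata(:,2);
x1 = x(1,1,:);
x2 = x(1,2,:);
A  = Lt;
B  = -(Pt + Lt + x2);
C  = Pt;
out = x1 .* ( -B + sqrt(B.^2 - 4.*A.*C) ) ./ (2.*A);
end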
