Nonlinear fit help?
13 views (last 30 days)
I have been trying to fit some data with a model for some time now and am getting fairly close. In Origin Pro the fit seems to work OK and I get reasonable parameters back, albeit the residuals are a little funny. Unfortunately, in MATLAB the fit is not so good. Could I send someone my code, my function, and some data to take a look at?
Oh and also there are imaginary numbers!
clear all;
close all
close all hidden
A = uiimport; % interactively import the two-column data below
clear xdata;
clear ydata;
xdata = A.data(:,1);
ydata = A.data(:,2);
Vg = xdata;
Id = abs(ydata);
x0 = [8.6E-10;1.7;0.8;5E6]; % initial guesses: [K; V; b; R]
options = optimset('Display','iter',...
'TolFun',1E-100,'TolX',1E-100,'MaxIter',1000);
[beta,resid,J,COVB,mse] = nlinfit(Vg,Id,@RcFun,x0,options);
ci = nlparci(real(beta),resid,'covar',COVB); % nlparci returns a single output
Id_new = ((((real(beta(1))*((Vg-real(beta(2))).^(real(beta(3))+1))).^(-1))+real(beta(4))).^(-1));
plot(Vg,Id_new,'r',Vg,Id,'o');
hold on;
plot(Vg,resid,'+');
function F = RcFun(a,Vg) % name must match the @RcFun handle above
K = real(a(1));
V = real(a(2));
b = real(a(3));
R = real(a(4));
F = (((K.*(Vg-V).^(b+1)).^(-1))+R).^(-1); % complex whenever Vg < V and b+1 is non-integer
end
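A quick check of where the imaginary numbers come from: ‘(Vg-V).^(b+1)’ raises a negative base to a non-integer power whenever Vg < V, and MATLAB returns a complex number for that. For example, with the Origin values quoted later in the thread (V = 17.3, b = 0.618):
(0 - 17.3)^(0.618 + 1)  % complex result
(20 - 17.3)^(0.618 + 1) % real result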
Data
A (Vg)   B (Id)
0 3.03E-12
1 1.5E-13
2 1.58E-12
3 2.81E-12
4 2.55E-12
5 2.31E-12
6 4.13E-12
7 2.89E-12
8 4.99E-12
9 6.38E-12
10 1.068E-11
11 1.96E-11
12 5.343E-11
13 5.405E-11
14 5.347E-11
15 5.142E-11
16 2.4139E-10
17 7.4428E-10
18 1.5752E-9
19 2.7328E-9
20 4.3347E-9
21 6.5506E-9
22 9.5258E-9
23 1.31356E-8
24 1.72672E-8
25 2.17876E-8
26 2.66302E-8
27 3.18252E-8
28 3.7101E-8
29 4.23594E-8
30 4.78078E-8
31 5.32604E-8
32 5.86136E-8
33 6.39262E-8
34 6.93234E-8
35 7.47466E-8
36 8.01152E-8
37 8.54398E-8
38 9.08214E-8
39 9.62598E-8
40 1.0184E-7
41 1.074E-7
42 1.1322E-7
43 1.1876E-7
44 1.2432E-7
45 1.299E-7
46 1.3534E-7
47 1.4062E-7
48 1.4596E-7
49 1.5096E-7
50 1.558E-7
51 1.6118E-7
52 1.6616E-7
53 1.7064E-7
54 1.7546E-7
55 1.7946E-7
56 1.8402E-7
57 1.8776E-7
58 1.9138E-7
59 1.9584E-7
60 1.9992E-7
7 Comments
Teja Muppirala
24 Jul 2012
Just curious, but what are the good values of K, V, b, and R that you obtained through Origin?
Teja Muppirala
24 Jul 2012
Ah. Sorry. You had it down below and I missed it: K = 8.6E-10, V = 17.3, b = 0.618, R = 2.3E6.
Accepted Answer
Star Strider
22 Jul 2012
Edited: Star Strider, 22 Jul 2012
Thank you for clarifying ‘RcFun’. After working with your code for a bit, the problem definitely seems to be the ‘(Vg-V)’ term. The only way I was able to get a decent fit without complex parameters was to use ‘lsqcurvefit’ and constrain ‘V’ to not be greater than zero, but then ‘nlparci’ wouldn't calculate confidence intervals on the parameter estimates. [I was able to get a good fit with ‘nlinfit’ by redefining that term as ‘abs(Vg-V)’, but that changed your function.] With ‘lsqcurvefit’, your ‘options’ statement and structure remain the same, although I increased ‘MaxIter’ and ‘MaxFunEvals’ in mine for diagnostic purposes. I also made ‘RcFun’ a single-line anonymous function for convenience:
RcFun = @(a,Vg) 1./(1./(a(1).*(Vg-a(2)).^(a(3)+1)) + a(4));
Lobnd = [];
Upbnd = ones(4,1)*realmax;
Upbnd(2) = 0; % Constrain ‘V’ to not be greater than zero
[beta,resnrm,resid,xitflg,outpt,lmda,J] = lsqcurvefit(RcFun,x0,Vg,Id,Lobnd,Upbnd,options); % NOTE: ‘Answers’ wrapped this line
and got an acceptable fit with these parameters:
K = 35.6744e-012
V = -32.7518e-003
b = 1.1302e+000
R = 145.7789e-003
The norm of the residuals was 5.2916E-015. I'm not certain what you're doing or what data you're modeling, so I can't interpret these or their validity.
In spite of the decent fit, the residuals had a definite oscillating trend.
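For reference, a minimal sketch of the ‘abs(Vg-V)’ variant mentioned above (note this changes the model, so treat its estimates accordingly):
% Sketch only: 'nlinfit' with the abs() variant of the model.
RcFunAbs = @(a,Vg) 1./(1./(a(1).*abs(Vg-a(2)).^(a(3)+1)) + a(4));
[betaAbs,residAbs] = nlinfit(Vg,Id,RcFunAbs,x0,options);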
7 Comments
Star Strider
23 Jul 2012
I would appreciate the PDFs. My circuit analysis and synthesis knowledge is relatively basic, but they could help me understand what you're doing.
This is the code snippet I've used:
RcFun = @(a,Vg) -(1./(1./(a(1).*(Vg-a(2)).^(a(3)+1)) + a(4)));
% RcFun = @(a,Vg) 1./(1./(a(1).*abs(Vg-a(2)).^(a(3)+1)) + a(4));
Vg = -Data(:,1);
Id = Data(:,2);
% x0 = [8.6E-10;1.7;0.8;5E6];
x0 = rand(4,1).*[1E-10; 1; 1; 1E+6];
% x0 = rand(4,1).*1E-1;
% options = optimset('Display','iter', 'TolFun',1E-100,'TolX',1E-100,'MaxIter',5000, 'MaxFunEvals', 2500);
options = optimset('Display','final', 'TolFun',1E-100,'TolX',1E-100,'MaxIter',5000, 'MaxFunEvals', 2500);
[beta,resid,J,COVB,mse] = nlinfit(Vg,Id,RcFun,[x0],options);
% Lobnd = [];
% Upbnd = ones(4,1)*realmax;
% Upbnd(2) = 0;
% [beta,resnrm,resid,xitflg,outpt,lmda,J] = lsqcurvefit(RcFun,x0,Vg,Id,Lobnd,Upbnd,options);
beta_est = beta
absbeta_est = abs(beta)
se = sqrt(diag(COVB/size(Data,1)));
if isreal(beta)
ci = nlparci(beta,resid,'covar',COVB);
% ci = nlparci(beta,resid,'jacobian',J);
end
Id_new = RcFun(beta,Vg);
figure(4096)
plot(Vg,real(Id_new),'r',Vg,Id,'o')
hold on
plot(Vg,imag(Id_new),'--r',Vg,Id,'o')
plot(Vg,real(resid),'+')
plot(Vg,imag(resid),'x')
hold off
grid
figure(4097)
plot(Vg,abs(Id_new),'r',Vg,Id,'o')
hold on
plot(Vg,abs(resid),'+')
hold off
grid
I didn't include the code and data I copied from your original post. I kept the lines I used for various diagnostic options, including my call to ‘lsqcurvefit’ and a variation on the objective function I tried.
More Answers (6)
Adam Parry
23 Jul 2012
Edited: Walter Roberson, 25 Jul 2012
19 Comments
Star Strider
24 Jul 2012
I gave it some thought overnight, too. (However, I'm reluctant to change nlinfit or the others.) I thought about the weighting vector and am considering:
WgtV = sqrt(Id);
thus forcing the routine to ignore the region where (Id = 0). I've done this successfully in the past with other functions and data, but in those situations I used the usual inverse-variance weighting vectors (clearly inappropriate here, since there are no variances on the measurements).
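A sketch of how that weighting might be wired into ‘nlinfit’ (which in these releases has no weights option): fold the weight vector into both the response and the model. ‘RcFun’ here is the anonymous-function version from above.
% Sketch: weighted fit by scaling the model and the response alike, so the
% minimized sum is WgtV.^2 times the squared residuals (zero wherever Id = 0).
WgtV = sqrt(Id);
RcFunW = @(a,Vg) WgtV.*RcFun(a,Vg);
betaW = nlinfit(Vg,WgtV.*Id,RcFunW,x0,options);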
The other thing I thought of is that since I have the Global Optimization Toolbox and a fair amount of experience with genetic algorithms (GA), I'm going to give that a go. We already have a good start on the fitness function (the objective function you derived) and on setting up a weighted metric [to avoid the (Vg < V) and (Id = 0) problems]. A Euclidean distance measure is probably the easiest, although we can get creative and use something more robust if necessary. The problem is that GAs usually take a while to converge, but they have the significant benefit of being essentially immune to the response surface, because they completely ignore it.
If we're not happy with the GA results, we'll give GlobalSearch a go, even though that may take longer than GA to converge. It is the most likely to produce the best solution. With luck, we'll have its results today or tomorrow.
These are problems I'll have to solve for my own applications in the not-too-distant future, so I'm interested in learning about their solutions.
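For what it's worth, a minimal sketch of the GA idea, using a weighted sum-of-squares fitness similar to the one Teja describes below; the bounds are illustrative guesses, not values from the thread:
% Sketch: genetic algorithm over [K, V, b, R], ignoring the Vg < 18 region
% and the imaginary parts of the model.
RcFun = @(a,Vg) 1./(1./(a(1).*(Vg-a(2)).^(a(3)+1)) + a(4));
fitness = @(a) sum((real(RcFun(a,Vg)) - Id).^2 .* (Vg >= 18));
lb = [1E-12 0 0 1E5]; % illustrative bounds
ub = [1E-8 20 3 1E8];
aGA = ga(fitness,4,[],[],[],[],lb,ub);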
Teja Muppirala
24 Jul 2012
This problem is clearly difficult. It has multiple minima, and a strong dependence on initial conditions. Instead of NLINFIT, I tried two different approaches which both give me good fits for values of Vg > 18.
In both cases, since it seems this equation can't really model the Vg < 18 region, I weighted that region to be zero. Also, just to scale things better, I am working with the log10's of K and R.
Option 1: Pattern Search Algorithm from the Global Optimization Toolbox
This gives a solution that is almost identical to the one that you found from your other software.
G = @(X) (((10.^(X(1)).*(Vg-X(2)).^(X(3)+1)).^(-1))+10.^(X(4))).^(-1);
E = @(X) ( sum(((real(G(X)) - Id)).^2 .* (Vg >= 18)) );
opts = psoptimset;
opts.Display = 'iter';
opts.MaxIter = realmax;
opts.MaxFunEvals = realmax;
X0 = [log10(8.6E-10);1.7;0.8;log10(5E6)];
X = patternsearch(E,X0,[],[],[],[],[],[],[],opts);
% Your previous answer
K = 8.6E-10; V = 17.3; b = 0.618; R = 2.3E6;
F0 = (((K.*(Vg-V).^(b+1)).^(-1))+R).^(-1);
% Comparing results: They are virtually identical
plot(Vg, Id);hold all; plot(Vg, G(X)); plot(Vg,F0);
Option 2: Simply using FMINSEARCH from various starting points
Again, identical to the previous results.
G = @(X) (((10.^(X(1)).*(Vg-X(2)).^(X(3)+1)).^(-1))+10.^(X(4))).^(-1);
E = @(X) ( sum(((real(G(X)) - Id)).^2 .* (Vg >= 18)) );
opts = optimset('fminsearch');
opts.MaxIter = realmax;
opts.MaxFunEvals = realmax;
opts.TolFun = realmin;
opts.TolX = realmin;
X0 = [log10(8.6E-10);1.7;0.8;log10(5E6)];
Emin = Inf;
for n = 1:100
disp(n)
[Xn, err(n)] = fminsearch(E,X0.*(1+0.01*randn(size(X0))),opts);
if err(n) < Emin
Emin = err(n);
X = Xn;
end
end
% Your previous answer
K = 8.6E-10; V = 17.3; b = 0.618; R = 2.3E6;
F0 = (((K.*(Vg-V).^(b+1)).^(-1))+R).^(-1);
% Comparing results: They are virtually identical
plot(Vg, Id);hold all; plot(Vg, G(X)); plot(Vg,F0);
2 Comments
Teja Muppirala
24 Jul 2012
Ah! Sorry, sorry, sorry. There's actually a MUCH easier way to make all of this work. The important point is to set the model equal to zero whenever it goes imaginary. I bet your other software was probably doing this automatically; MATLAB does not. See how I modified rcfun below to include this line:
F = F.*(~imag(F));
This sets F to be identically zero whenever it has an imaginary component.
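An equivalent, perhaps more explicit, way to write that line:
F(imag(F) ~= 0) = 0; % zero out every element with a nonzero imaginary part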
Anyways, copy the entire function below into a file and run it from the command line:
>> [BETA,Rout,J,COVB,MSE] = domyfit
and it will work perfectly (and it should work for other data in general). The standard errors on your coefficients are given by sqrt(diag(COVB)).
function [BETA,Rout,J,COVB,MSE] = domyfit
A = struct;
A.data = [0 3.03E-12
1 1.5E-13
2 1.58E-12
3 2.81E-12
4 2.55E-12
5 2.31E-12
6 4.13E-12
7 2.89E-12
8 4.99E-12
9 6.38E-12
10 1.068E-11
11 1.96E-11
12 5.343E-11
13 5.405E-11
14 5.347E-11
15 5.142E-11
16 2.4139E-10
17 7.4428E-10
18 1.5752E-9
19 2.7328E-9
20 4.3347E-9
21 6.5506E-9
22 9.5258E-9
23 1.31356E-8
24 1.72672E-8
25 2.17876E-8
26 2.66302E-8
27 3.18252E-8
28 3.7101E-8
29 4.23594E-8
30 4.78078E-8
31 5.32604E-8
32 5.86136E-8
33 6.39262E-8
34 6.93234E-8
35 7.47466E-8
36 8.01152E-8
37 8.54398E-8
38 9.08214E-8
39 9.62598E-8
40 1.0184E-7
41 1.074E-7
42 1.1322E-7
43 1.1876E-7
44 1.2432E-7
45 1.299E-7
46 1.3534E-7
47 1.4062E-7
48 1.4596E-7
49 1.5096E-7
50 1.558E-7
51 1.6118E-7
52 1.6616E-7
53 1.7064E-7
54 1.7546E-7
55 1.7946E-7
56 1.8402E-7
57 1.8776E-7
58 1.9138E-7
59 1.9584E-7
60 1.9992E-7];
xdata = A.data(:,1);
ydata = A.data(:,2);
Vg = xdata;
Id = abs(ydata);
X0 = [log10(8.6E-10);1.7;0.8;log10(5E6)];
[BETA,Rout,J,COVB,MSE] = nlinfit(Vg,Id,@rcfun,X0);
K = 8.6E-10; V = 17.3; b = 0.618; R = 2.3E6;
F0 = (((K.*(Vg-V).^(b+1)).^(-1))+R).^(-1);
plot(Vg, Id);hold all; plot(Vg,rcfun(BETA,Vg)); plot(Vg,F0);
function F = rcfun(X,Vg)
F = (((10.^(X(1)).*(Vg-X(2)).^(X(3)+1)).^(-1))+10.^(X(4))).^(-1);
F = F.*(~imag(F));
9 Comments
Star Strider
25 Jul 2012
SUCCESS!
After all that, it came down to the initial conditions, although I also made some changes to ‘RcFun’ to make it work seamlessly with both ‘nlinfit’ and ‘lsqcurvefit’.
The new initial conditions:
x0 = rand(4,1).*[1E-13; 1; 1; 1E+7];
and the new objective function:
function [F,J] = RdFun(a,Vg)
% Adam Parry’s Organic FET Function
% DATE: 2012 07 23
%
%
%
% F = (((K.*(Vg-V).^(b+1)).^(-1))+R).^(-1)
% w.r.t: K, V, b, R
% respectively a(1), a(2), a(3), a(4)
% x0 = [ 8.6E-10 ;1.7 ;0.8 ;5E6];
if a(2) > Vg % true only when V exceeds every element of Vg
    F = 0;
    return
end
F = a(1)./(a(1).*a(4) + 1./(Vg - a(2)).^(a(3) + 1));
% Analytic Jacobian, one column per parameter [dF/dK dF/dV dF/db dF/dR]:
Jfcn = @(a,Vg) [ ...
    1./(a(1).*a(4) + 1./(Vg - a(2)).^(a(3) + 1)) - (a(1).*a(4))./(a(1).*a(4) + 1./(Vg - a(2)).^(a(3) + 1)).^2; ...
    -(a(1).*(a(3) + 1))./((Vg - a(2)).^(a(3) + 2).*(a(1).*a(4) + 1./(Vg - a(2)).^(a(3) + 1)).^2); ...
    (a(1).*log(Vg - a(2)))./((Vg - a(2)).^(a(3) + 1).*(a(1).*a(4) + 1./(Vg - a(2)).^(a(3) + 1)).^2); ...
    -a(1).^2./(a(1).*a(4) + 1./(Vg - a(2)).^(a(3) + 1)).^2]';
if nargout > 1
    J = Jfcn(a,Vg);
    if any(size(J) == 1) % concatenation collapsed to a vector: re-evaluate on the transpose
        J = Jfcn(a,Vg');
    end
end
% ========================= END: RcFun.m =========================
produces the parameters:
Parameter Estimates:
K = 3.254448861356919E-10 0.000000000000000E+00j
V = 1.559046341560319E+01 0.000000000000000E+00j
b = 9.096959820500091E-01 0.000000000000000E+00j
R = 2.828879053496115E+06 0.000000000000000E+00j
Parameter Estimates:
|K| = 3.254448861356919E-10
|V| = 1.559046341560319E+01
|b| = 9.096959820500091E-01
|R| = 2.828879053496115E+06
and it only took ‘nlinfit’ 2.075 seconds to converge.
However, they do not appear to be unique, because these produce an equivalently close fit:
Parameter Estimates:
K = 6.692378493392830E-10 0.000000000000000E+00j
V = 1.685567986182001E+01 0.000000000000000E+00j
b = 6.960550385747625E-01 0.000000000000000E+00j
R = 2.476857099346900E+06 0.000000000000000E+00j
Parameter Estimates:
|K| = 6.692378493392830E-10
|V| = 1.685567986182001E+01
|b| = 6.960550385747625E-01
|R| = 2.476857099346900E+06
as do these:
Parameter Estimates:
K = 3.054487171664253E-10 0.000000000000000E+00j
V = 1.547909799526304E+01 0.000000000000000E+00j
b = 9.280342803415647E-01 0.000000000000000E+00j
R = 2.854955008915018E+06 0.000000000000000E+00j
Parameter Estimates:
|K| = 3.054487171664253E-10
|V| = 1.547909799526304E+01
|b| = 9.280342803415647E-01
|R| = 2.854955008915018E+06
that produced these 95% confidence intervals:
137.6613e-012 473.2361e-012
14.4356e+000 16.5226e+000
772.1086e-003 1.0840e+000
2.6426e+006 3.0673e+006
and this covariance matrix:
8.2089e-021 49.8600e-012 -7.6134e-012 -9.6597e-006
49.8600e-012 317.5260e-003 -45.6541e-003 -54.6273e+003
-7.6134e-012 -45.6541e-003 7.0893e-003 9.1732e+003
-9.6597e-006 -54.6273e+003 9.1732e+003 13.1525e+009
although sometimes it still had a bit of a problem converging. The residuals were reasonably flat with no significant trend, at least in my opinion. I guess that's the result of the ‘rand’ component of the initial parameter guess vector. I'm still going to work on GlobalSearch, but I feel reasonably confident about these, considering the fit and the residuals. Unfortunately, ‘lsqcurvefit’ consistently failed to find a fit with the same initial parameter guesses. I'll let you know what GlobalSearch comes up with. It may take a few days for me to learn about it and successfully run it.
You might want to experiment a bit with it yourself to see what the various parameter estimates look like. That will give you a better idea than even the confidence intervals will about the region of fit.
Teja Muppirala
25 Jul 2012
"How do you go from log10 parameters to estimating the error in the parameter itself using COVB??"
It's very easy; you just use the chain rule.
My change of coordinates was y = 10^x,
so dy/dx = 10^x * log(10).
Let A = the covariance in regular coordinates (this is what you want) and B = the covariance in log coordinates (this is what you have).
Then each entry of A is the corresponding entry of B scaled by the derivative factor of each parameter involved:
A(i,j) = (dy_i/dx_i) * (dy_j/dx_j) * B(i,j)
with dy/dx = 10^x * log(10) for the log10-scaled parameters and 1 for the untransformed ones.
In hindsight, I think this was all a bad idea, and you should just do it without any rescaling, so you don't have to worry about these kinds of things.
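In matrix form this is the delta method. A sketch, assuming ‘BETA’ and ‘COVB’ come from the log10-scaled fit above, where entries 1 and 4 (K and R) are the log10'd parameters:
% Sketch: delta-method back-transformation of the covariance matrix.
d = ones(4,1);
d([1 4]) = 10.^BETA([1 4])*log(10); % dy/dx for the log10-scaled entries
Jt = diag(d);
COVB_orig = Jt*COVB*Jt.'; % approximate covariance in the original coordinates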
17 Comments
Star Strider
27 Jul 2012
The nice thing is that fitting a straight line only requires two parameters. However, you have four in your model, so I don't see the advantage in that unless you can ignore two of them. When I did this just now, I got estimates for R of 195.8839E+006 and for V of 19.9989E+000.
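For reference, a line fit of that sort might look like the following, assuming the linear-region approximation Id ≈ (Vg - V)/R (the cutoff at Vg = 20 is an illustrative choice):
% Sketch: estimate R and V from the linear region with a two-parameter fit.
lin = Vg >= 20;
p = polyfit(Vg(lin),Id(lin),1); % slope ~ 1/R, intercept ~ -V/R
R_est = 1/p(1);
V_est = -p(2)/p(1);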
I believe the nature of your data and the nature of your model make fitting your data difficult. The model doesn't really account for the sharp discontinuity where Vg = V, or for the very small oscillations in the Id data, which are likely real although quantitatively unimportant. (When I looked at them on the model I estimated from the parameter sets I posted, the magnitude of the oscillations with the best parameter estimates was about 1% of the value of the function at that point.)
The problem is that taking the logs of the parameters and then reconstituting them in the model changes the model. It's easier to see if, instead of using 10.^a(1) and 10.^a(4), you substitute exp(a(1)) and exp(a(4)). That's not the model you specified, and it doesn't make any sense in the context of the papers you referred me to, so I am not using the parameter transformation.
A Taylor series approximation might still be the way to go initially to get a good start on the parameter estimates, but you will have to change your data and model to avoid the possibility of ‘(-V).^p’. Changing ‘(Vg-V)’ to ‘(V-Vg)’ does this, and likely only requires changing the Vg data to -Vg. In the Taylor series I sent along, simply change the signs of ‘V’ (a(2)) and ‘Vg’ to make that transformation. I suggest that only for the Taylor approximation, in order to get a good initial set of stable parameter estimates.
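A sketch of that sign change (intended only for the initial-estimate step, as described):
% Sketch: negate Vg so the power term becomes (V - Vg); the recovered
% a(2) is then -V, so negate it back afterwards.
VgNeg = -Vg;
x0Neg = [x0(1); -x0(2); x0(3); x0(4)];
betaNeg = nlinfit(VgNeg,Id,RcFun,x0Neg,options);
V_est = -betaNeg(2);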
The parameter estimates I posted earlier are among the best I got. You can plug them in directly and calculate the relevant statistics. There is a small oscillation in the residuals, but it is barely discernible. This set took less than three seconds to converge and produced an MSE of 1.3182E-018:
Parameter Estimates:
|K| = 2.627883069159599E-10
|V| = 1.521501224663270E+01
|b| = 9.713005068703799E-01
|R| = 2.914483559512664E+06
I gave up on GlobalSearch because it either didn't find a global minimum or got caught in a loop that required me to either stop it or completely exit MATLAB, and about which it gave me no information. It seemed to have the most trouble fitting ‘R’ (it seemed to persistently underestimate it), but didn't seem to have trouble with the other parameters.
I suggest you go with the parameter estimates that make the most sense and that you are the most comfortable with. I know nothing about Origin, how it decides the region to fit, or how it calculates its parameters, so I can't advise you on what parameter set is best.
I'll be glad to continue to provide what help I can.
Adam Parry
30 Jul 2012
24 Comments
Star Strider
3 Aug 2012
Edited: Star Strider, 3 Aug 2012
I don't know how MATLAB routines could decide for themselves what part of the data to use. I have no idea how Origin managed to fit your function as well as it did, and only over the linear region that produced real parameters, assuming you gave it and MATLAB the same function and data.
I don't know the data you're commenting on. Since I know only the data from the present problem, I suggest you start the fit with the greatest ‘Id’ data, take a range of those data, do the fit, and estimate ‘V’. Then take the data greater than the estimated ‘V’ and fit all of it. If your parameters are linear functions of your data (as ‘K’ and ‘R’ were here), and if your ‘Id’ data are again on the order of 1E-7, you can scale the ‘Id’ data and then ‘un-scale’ ‘K’ and ‘R’ by the same factor. Since ‘V’ and ‘b’ are not linear functions of either variable, you have to fit them as they are. So you can't scale ‘Vg’.
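A sketch of the scaling idea, assuming the model above: multiplying Id by a factor s multiplies K by s and divides R by s, so the fitted values can be un-scaled afterwards. The factor 1E7 is illustrative.
% Sketch: fit scaled data, then un-scale the linearly-entering parameters.
s = 1E7; % brings Id to order 1
betaS = nlinfit(Vg,s*Id,RcFun,[s*x0(1); x0(2); x0(3); x0(4)/s],options);
K = betaS(1)/s; % K multiplies the current, so it scales with s
V = betaS(2);
b = betaS(3);
R = betaS(4)*s; % R enters as a reciprocal of current, so it scales inversely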
Your Jacobian looks fine to me. The Optimization and Statistics Toolboxes use different conventions (Statistics uses column-major, Optimization row-major), so to make the same Jacobian work with both, you have to transpose your ‘VgX’ vector when you call the Optimization Toolbox functions; if your data are column vectors, the Jacobian as listed should work with the Statistics Toolbox functions as-is. Take the ‘RcFun.m’ I sent you, rename it for your present problem, then paste your ‘F’ and ‘J’ functions in place of the ones I provided. It should work fine. Note that my variable ‘Jfcn’ is an anonymous function, so be sure to paste your ‘J’ expression there and change the argument from ‘Vg’ to ‘VgX’.
That should work.
Alex Sha
26 Apr 2019
For the fitting function: F = (((K.*(Vg-V).^(b+1)).^(-1))+R).^(-1);
Obviously, V should satisfy V < min(Vg) = 0; the result may be:
Root of Mean Square Error (RMSE): 2.54378147359793E-9
Sum of Squared Residual: 3.94720275310625E-16
Correlation Coef. (R): 0.999399159124327
R-Square: 0.998798679258411
Parameter Best Estimate
---------- -------------
k 6.10183988149707E-14
v -5.67776067411581E-15
b 3.02503323118395
r 3951289.1816463
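For reference, a quick way to check those estimates against the data:
% Sketch: plug the estimates back into the model and compare with the data.
k = 6.10183988149707E-14; v = -5.67776067411581E-15;
b = 3.02503323118395;     r = 3951289.1816463;
F = (((k.*(Vg-v).^(b+1)).^(-1))+r).^(-1); % real, since v < min(Vg)
plot(Vg,Id,'o',Vg,F,'r'); grid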
0 Comments