Differences in power law fit vs. linear fit on log-log scale

35 views (last 30 days)
L'O.G. on 28 Nov 2022
Commented: L'O.G. on 28 Nov 2022
I am fitting data using cftool to a power law, i.e. y = a*x^b, and comparing it to the first-order polynomial fit, i.e. log10(y) = p1*log10(x) + p2, when I take the base-10 log of the same data in both x and y. The scaling exponent that I get is different: 0.523 in the first case vs. 0.498 in the second. Why is there any difference? Should I just go by whichever fit has the better goodness-of-fit statistics, or is there some better way of going about this? Attached is the data that I am using as an example: the x values are in the first column, and the y values are in the second column.


Accepted Answer

the cyclist on 28 Nov 2022
Hm. I get much closer values when I use fitlm (0.498) and nlinfit (0.504), for log space and the original space, respectively. (I don't have Curve Fitting Toolbox.)
It's close enough that I would not have thought twice about it, and chalked it up to convergence criteria, perhaps.
I think it is possible that the two fits are not mathematically equivalent: the least-squares criterion minimizes a sum of squared residuals, and squared residuals in the original space are not the same as squared residuals in log space, so the log transformation changes what is being minimized. I'm not sure, and I'd have to think more carefully about it.
If there is truly a difference, the ideal way to decide whether to fit in linear or log space would be based on which space better obeys the model assumptions (things like residuals being normally distributed, etc.). Frankly, the models are so close to each other that I'm not sure it much matters, from a pragmatic point of view. But there is probably a "right" answer, if one needs to be theoretically nitpicky.
% Assign x and y from your C matrix
x = C(:,1);
y = C(:,2);
% Fit the model in log space.
% (I used the natural log, not the base-10 log, but it doesn't matter for the slope term)
mdl = fitlm(log(x),log(y))
mdl =
Linear regression model:
    y ~ 1 + x1

Estimated Coefficients:
                   Estimate        SE        tStat       pValue
                   ________    _________    _______    ___________
    (Intercept)    -5.9418     0.0075206    -790.08    3.1655e-188
    x1             -0.49792     0.001989    -250.33    2.4397e-139

Number of observations: 100, Error degrees of freedom: 98
Root Mean Squared Error: 0.017
R-squared: 0.998, Adjusted R-Squared: 0.998
F-statistic vs. constant model: 6.27e+04, p-value = 2.44e-139
% Define function that will be used to fit data (F is a vector of fitting parameters)
f = @(F,x) F(1) * x.^(-F(2));
% Do the non-linear fit
F_fitted = nlinfit(x,y,f,[1 1]);
% Display fitted coefficients
disp(['F = ',num2str(F_fitted)])
F = 0.0026755 0.5041
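The same comparison can be sketched outside MATLAB. Below is a minimal Python analog (numpy's polyfit standing in for fitlm on the logged data, scipy's curve_fit standing in for nlinfit), run on synthetic power-law data since the attached C matrix isn't reproduced here; the true coefficients a_true and b_true are made-up stand-ins:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

# Synthetic stand-in for the attached data: y = a * x^(-b) with small multiplicative noise
a_true, b_true = 0.0027, 0.5
x = np.linspace(1e-4, 1e-2, 100)
y = a_true * x**(-b_true) * (1 + 0.01 * rng.standard_normal(x.size))

# Fit 1: ordinary least squares on the logged data (analog of fitlm(log(x), log(y)))
slope, intercept = np.polyfit(np.log(x), np.log(y), 1)

# Fit 2: nonlinear least squares in the original space (analog of nlinfit)
f = lambda x, a, b: a * x**(-b)
(a_fit, b_fit), _ = curve_fit(f, x, y, p0=[1, 1])

print(f"log-space exponent:      {-slope:.4f}")
print(f"original-space exponent: {b_fit:.4f}")
```

With noise this small the two exponents agree to a few parts in a thousand, mirroring the 0.498 vs. 0.504 results above; the residual gap comes from the two fits minimizing squared error in different spaces.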
1 Comment
L'O.G. on 28 Nov 2022
Thank you. This is a wonderfully thoughtful answer.


More Answers (1)

Matt J on 28 Nov 2022
Edited: Matt J on 28 Nov 2022
There is a difference because the statistical distribution of the measurement errors changes under the log transformation. You should go with the one that fits best; that will probably be the model whose errors are closest to additive Gaussian noise.

