
Hyperbolic Least Squares Interpolation

6 views (last 30 days)
Georg Söllinger on 10 September 2016
Commented: Star Strider on 10 September 2016
Hello Everybody,
I have got 4 data points from trials. They seem to be aligned in a hyperbolic manner, so what I want to do is find the least-squares regression of those values with a function of the form a/(b*x + c), where the c value is equal to zero.
Does MATLAB provide a standard function like polyfit for such a problem? Or is it possible to modify the data (a coordinate transformation) so that polyfit can be applied?
Thanks for your help! Georg

Accepted Answer

Star Strider on 10 September 2016
Edited: Star Strider on 10 September 2016
You can use core MATLAB functions to do the regression:
x = ...; % Independent Variable
y = ...; % Dependent Variable
fcn1 = @(b,x) b(1)./(b(2).*x + b(3)); % Objective Function #1
fcn2 = @(b,x) b(1)./(b(2).*x); % Objective Function #2
SSECF = @(b) sum((y - fcn2(b,x)).^2); % Sum-Squared-Error Cost Function (Use ‘fcn2’ Here)
B0 = [1; 1]; % Initial Parameter Estimates
[B,SSE] = fminsearch(SSECF, B0); % Estimate Parameters
xv = linspace(min(x), max(x));
figure(1)
plot(x, y, 'bp')
hold on
plot(xv, fcn2(B,xv), '-r')
hold off
grid
I tested this with random vectors and it ran without error.
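For readers following along in Python rather than MATLAB, the same recipe maps almost line-for-line onto SciPy's Nelder-Mead minimiser, the counterpart of fminsearch. A minimal sketch (the x, y values here are hypothetical, not the original poster's trial data):

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical data roughly following y = b1/(b2*x + b3)
x = np.array([1.0, 2.0, 4.0, 8.0])
y = np.array([2.9, 1.6, 0.8, 0.4])

fcn = lambda b, x: b[0] / (b[1] * x + b[2])   # three-parameter model
sse = lambda b: np.sum((y - fcn(b, x))**2)    # sum-squared-error cost

# Derivative-free simplex search, analogous to fminsearch(SSECF, B0)
res = minimize(sse, x0=[1.0, 1.0, 1.0], method='Nelder-Mead')
print(res.x, res.fun)
```

As in the MATLAB version, the initial guess (here `x0`) needs one element per estimated parameter.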
EDIT — Note that the two-parameter model you want effectively has only one free parameter: a simple ratio (or product) of two parameters will not uniquely identify either of them, only the ratio (or product) itself. The three-parameter model actually makes sense.
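The identifiability point is easy to check numerically. A small sketch in Python/NumPy with hypothetical data: two different parameter vectors with the same ratio give identical predictions, and the equivalent one-parameter model y = k/x even has a closed-form least-squares solution.

```python
import numpy as np

# Hypothetical data exactly following y = k/x with k = 3
x = np.array([1.0, 2.0, 4.0, 8.0])
y = 3.0 / x

f = lambda b, x: b[0] / (b[1] * x)   # the two-parameter model

# Only the ratio b[0]/b[1] is identified: both parameter
# vectors predict exactly the same curve.
assert np.allclose(f([2.0, 1.0], x), f([4.0, 2.0], x))

# One-parameter model y = k/x: setting d/dk sum((y - k/x)^2) = 0
# gives the closed form k = sum(y/x) / sum(1/x^2).
k = np.sum(y / x) / np.sum(1.0 / x**2)
print(k)  # -> 3.0
```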
2 Comments
Georg Söllinger on 10 September 2016
Thanks a lot for your help, it works very well! So this approach should work for any arbitrary function, shouldn't it?
Star Strider on 10 September 2016
My pleasure!
It should work for any well-characterised objective function you give it. The ‘B0’ vector has to have one element for each parameter that you want to estimate. The closer the initial estimates are to the ‘best’ fit (in both magnitude and sign), the better.
The Nelder-Mead algorithm used in fminsearch works best when it is minimising at most 7 parameters. Since it is derivative-free, it is more likely to converge than algorithms that use a Jacobian matrix.
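As an aside on the coordinate-transformation idea from the original question: with c = 0 the model linearises, since y = b1/(b2·x) implies 1/y = (b2/b1)·x, a line through the origin. A sketch with hypothetical data (using a zero-intercept linear solve, because np.polyfit always fits an intercept):

```python
import numpy as np

x = np.array([1.0, 2.0, 4.0, 8.0])
y = 3.0 / x            # hypothetical data, true ratio b1/b2 = 3

# 1/y = (b2/b1) * x is linear through the origin, so solve for
# the slope with ordinary least squares on a single column.
A = x[:, None]
m, *_ = np.linalg.lstsq(A, 1.0 / y, rcond=None)
ratio = 1.0 / m[0]     # recovers b1/b2, here ≈ 3.0
print(ratio)
```

One caveat: fitting in the transformed 1/y coordinates re-weights the errors (points with small y dominate), so the direct nonlinear fit in the accepted answer is usually preferable when the noise is on y itself.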


More Answers (0)
