Unstable derivative approximation when steps get too small

Views: 42 (last 30 days)
Eli on 2 Jan 2020
Comment: Eli on 3 Jan 2020
f'(x) = lim[h→0] (f(x+h) - f(x)) / h
The approximation improves as h gets smaller, but when h gets too small the approximation gets worse again. When h = 0 the result is NaN (0/0), and that's fine.
What I don't understand is why the computation loses stability when h is merely very small.
function fidiff(x)
if nargin < 1
    x = 1;
end
fp = exp(x);                  % exact derivative: d/dx exp(x) = exp(x)
n = 15;
h = logspace(-(n-1), 0, n)';  % column vector of steps, 1e-14 up to 1
AErr = zeros(n, 1);           % preallocate the error vectors
RErr = zeros(n, 1);
fprintf('       h           fp         fpfd      AErr      RErr\n');
for k = n:-1:1
    fpfd = (exp(x + h(k)) - exp(x))/h(k);  % forward-difference estimate
    AErr(k) = abs(fpfd - fp);              % absolute error
    RErr(k) = abs((fpfd - fp)/fp);         % relative error
    fprintf('%8.1e %12.5f %12.5f %9.2e %9.2e\n', h(k), fp, fpfd, AErr(k), RErr(k));
end
end
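To show what I mean, here is a quick log-log plot of the absolute error against the step (a small sketch that recomputes the same forward difference, vectorized over h):
% Visualize the instability: on a log-log scale the absolute error falls
% as h shrinks, then rises again once h drops below roughly 1e-8.
x = 1;
h = logspace(-14, 0, 15)';                      % same steps as fidiff
AErr = abs((exp(x + h) - exp(x))./h - exp(x));  % forward-difference error
loglog(h, AErr, 'o-');
xlabel('step h'), ylabel('absolute error');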
Thank you so much.

Accepted Answer

John D'Errico on 2 Jan 2020
Edited: John D'Errico on 2 Jan 2020
This is a fundamental idea about floating point numbers that you need to understand, at least if you will work with computers in any kind of numerical setting.
When you do any computation using a floating point number (a double), only a FINITE number of bits is stored, effectively about 16 decimal digits. However, when you subtract two numbers that are very close to each other in value, a phenomenon called massive subtractive cancellation occurs. For example, consider the following computation:
1/3 - 0.33333333333333
ans =
3.33066907387547e-15
While you might have hoped to see a difference like 0.000000000000003333333333333..., that is simply not possible in floating point arithmetic; the trailing digits were never stored in the first place.
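You can quantify how little survives the subtraction by comparing the difference with the spacing of adjacent doubles near the operands, which is what eps reports (a small illustrative check):
a = 1/3;                % stored with about 16 significant decimal digits
b = 0.33333333333333;   % agrees with a through the first ~14 digits
d = a - b               % 3.33066907387547e-15, only a few digits meaningful
eps(a)                  % 5.5511e-17: spacing of adjacent doubles near 1/3
d/eps(a)                % 60: the difference spans only 60 such steps (ulps)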
Look at what happens in a typical scenario. We want to differentiate some function f; f(x) = exp(x) is a good example. One reason for that choice is that the true derivative is easy to compute, since f'(x) = exp(x) as well. So, compute the derivative at x = 1, using the classic formula.
x = 1;
dx = logspace(-16,0,17)';
format long g
f = @exp;
[dx,f(x + dx) - f(x),(f(x + dx) - f(x))./dx]
ans =
1e-16 0 0
1e-15 2.66453525910038e-15 2.66453525910038
1e-14 2.66453525910038e-14 2.66453525910038
1e-13 2.71338507218388e-13 2.71338507218388
1e-12 2.71827005349223e-12 2.71827005349223
1e-11 2.71827005349223e-11 2.71827005349223
1e-10 2.71827893527643e-10 2.71827893527643
1e-09 2.71828159981169e-09 2.71828159981169
1e-08 2.71828177744737e-08 2.71828177744737
1e-07 2.71828196396484e-07 2.71828196396484
1e-06 2.71828318698653e-06 2.71828318698653
1e-05 2.71829541991231e-05 2.71829541991231
0.0001 0.000271841774707848 2.71841774707848
0.001 0.00271964142253278 2.71964142253278
0.01 0.0273191865578708 2.73191865578708
0.1 0.285884195487388 2.85884195487388
1 4.6707742704716 4.6707742704716
So the third column is the derivative estimate, where the real derivative should be:
exp(1)
ans =
2.71828182845905
As you can see, it is terrible for large values of dx. That much we should expect. But why is it also poor when dx is really tiny? The problem here is subtractive cancellation.
[f(x),f(x+1e-15)]
ans =
2.71828182845905 2.71828182845905
In fact, EVERY single digit shown there in those two numbers is the same. Those digits are all that MATLAB stores for those numbers. So when we subtract two numbers that are this close together, we get what is essentially garbage. Compare that to a higher precision computation.
vpa(exp(sym(1)))
ans =
2.7182818284590452353602874713527
vpa(exp(sym(1) + sym('1e-15')))
ans =
2.7182818284590479536421159303993
As you can see there, the two numbers are not in fact identical. Stuff happens down below the 16th decimal place that was lost when we used doubles.
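Another way to see it: in doubles, that subtraction can only return an integer multiple of the spacing of representable numbers near exp(1) (a small check using eps):
% In double precision the difference is quantized to the grid of
% representable doubles near exp(1).
d = exp(1 + 1e-15) - exp(1)   % 2.66453525910038e-15 in double precision
eps(exp(1))                   % 4.4409e-16: spacing of doubles near e
d/eps(exp(1))                 % exactly 6: the difference is 6 ulps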
Now, if we do the same computation but use symbolic tools for the arithmetic, everything works very nicely, as you can see.
vpa(exp(sym(1) + sym('1e-15')) - exp(sym(1)))/sym('1e-15')
ans =
2.7182818284590465945012015276265
exp(1)
ans =
2.71828182845905
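One more detail worth noticing: even the symbolic quotient above is not exactly e. It is off by about 1.36e-15, which is precisely the leading truncation term of the forward-difference formula, (h/2)*f''(x); that term is the standard Taylor-series result, not something specific to this thread. A quick check:
h = 1e-15;
h/2*exp(1)   % ~1.3591e-15, matching the gap between the vpa result and e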
So when we use doubles to do all of the arithmetic, we find the best result comes when dx is roughly 1e-8, which just happens to be roughly sqrt(eps(1)).
sqrt(eps(1))
ans =
1.49011611938477e-08
When dx is smaller than that value, cancellation starts to cause problems; when dx is larger, the formula's own approximation error takes over. A numerical analysis course might even have you thinking about why that behavior exists and how to understand it in more depth.
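To see why sqrt(eps) is the sweet spot: the forward difference carries two competing errors, a truncation error that grows like (h/2)*|f''(x)| and a roundoff error from the subtraction that grows like eps*|f(x)|/h. Their sum is minimized near h = sqrt(2*eps*|f(x)/f''(x)|), which for f = exp at x = 1 is sqrt(2*eps), about 2.1e-8. A minimal sketch comparing the measured error against that textbook model (the model formulas are the standard ones, not taken from this thread):
% Measured forward-difference error vs. the textbook error model:
% truncation ~ h/2*|f''(x)|, roundoff ~ eps*|f(x)|/h. For f = exp at
% x = 1, both |f| and |f''| equal exp(1), so the model's minimum sits
% at h = sqrt(2*eps), about 2.1e-8.
x = 1;
h = logspace(-16, 0, 161)';
measured = abs((exp(x + h) - exp(x))./h - exp(x));
model = h/2*exp(x) + eps*exp(x)./h;
loglog(h, measured, '.', h, model, '-');
xlabel('h'), ylabel('absolute error');
legend('measured', 'error model', 'Location', 'north');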

More Answers (1)

Fabio Freschi on 2 Jan 2020
