How to avoid error calculation going to infinity when dividing by near-zero values?

11 views (last 30 days)
Hello,
I am calculating the relative error between two signals using the following equation:
relative_error(i) = abs((measured(i)-estimated(i))./measured(i))*100;
The problem is that when measured is zero or near zero, relative_error explodes.
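For example, with some made-up sample values (just for illustration), the blow-up looks like this:
measured  = [2.0 1.0 1e-4 0];     % made-up sample signal
estimated = [1.9 1.1 0.05 0.05];  % made-up estimate
relative_error = abs((measured-estimated)./measured)*100
% returns [5 10 49900 Inf] -- the near-zero entry dominates and the zero entry goes to Inf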
Do you know a more appropriate way of calculating the error between two signals that avoids this problem?
Thanks,
Cerilet

Answers (1)

Honglei Chen on 24 Oct 2017
There are several ways to handle this. For example, you can add one more line that sets all of those points to a predefined value:
relative_error(i) = abs((measured(i)-estimated(i))./measured(i))*100;
relative_error(measured==0) = 1; % say predefined value is 1
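A minimal, self-contained sketch of this approach in vectorized form, using the same made-up sample values as above (1 is just the placeholder value):
measured  = [2.0 1.0 1e-4 0];     % made-up sample signal
estimated = [1.9 1.1 0.05 0.05];  % made-up estimate
relative_error = abs((measured-estimated)./measured)*100;
relative_error(measured == 0) = 1;   % overwrite the Inf entry with the placeholder
% relative_error is now [5 10 49900 1]
Note that measured == 0 only catches exact zeros; if near-zero values are also a concern, you could widen the mask to something like abs(measured) < tol for a tolerance tol of your choosing.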
Or you can simply add a small epsilon to the denominator to avoid the issue:
epsilon = 1e-8;
relative_error(i) = abs((measured(i)-estimated(i))./(measured(i)+epsilon))*100;
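And a corresponding sketch for the epsilon variant, again with made-up values:
epsilon = 1e-8;
measured  = [2.0 1.0 1e-4 0];     % made-up sample signal
estimated = [1.9 1.1 0.05 0.05];  % made-up estimate
relative_error = abs((measured-estimated)./(measured+epsilon))*100
% returns roughly [5 10 49895 5e8] -- finite everywhere, though still large where measured is tiny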
HTH
