
# Polynomial fitting with polyfit_roots

The script polyfit_roots_drv.m demonstrates the function polyfit_roots, which finds the roots lambda_1, ..., lambda_n and the constant k so that the polynomial

p(x) = k*(x - lambda_1)*(x - lambda_2)*...*(x - lambda_n)

is the best least-squares fit to the data y_i at the points x_i. This problem is much better conditioned than finding the monomial coefficients of the polynomial, as Matlab's polyfit does, so it can be solved for polynomials of much higher degree. After the fit is found, it can be evaluated with the function polyval_roots.
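To see concretely why the monomial-coefficient formulation is ill-conditioned, one can check how the condition number of the Vandermonde-type matrix underlying polyfit grows with the degree. This is a small illustrative sketch (the grid and degrees are arbitrary and not part of the demo; the implicit expansion in `x.^(n:-1:0)` requires R2016b or later):

```matlab
% Condition number of the monomial least-squares matrix for
% equally spaced points on [-1, 1]. It grows roughly exponentially
% with the degree, which is what limits polyfit to modest degrees.
m = 100;
x = linspace(-1, 1, m).';
for n = [5 10 20 40]
    V = x.^(n:-1:0);   % columns x.^n, x.^(n-1), ..., 1
    fprintf('degree %2d: cond(V) = %.2e\n', n, cond(V));
end
```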

The code was written by Amit Hochman. It borrows ideas from:

1. "Robust Rational Function Approximation Algorithm for Model Generation" by C. P. Coelho, J. R. Phillips, and L. M. Silveira, 36th Annual ACM/IEEE Design Automation Conference.

2. "Robust rational interpolation and least-squares" by P. Gonnet, R. Pachon, and L. N. Trefethen, ETNA, vol. 38, 2011.

## Low-order fit

Define a test function:

```matlab
f = @(x)(1+x).*sin(2*pi*x);
```

Sample it at 100 equally spaced points on [-1, 1]:

```matlab
m = 100; x = linspace(-1, 1, m); y = f(x);
```

and fit a degree-7 polynomial to this data:

```matlab
n = 7; [lambda, k] = polyfit_roots(x, y, n);
length(lambda)
```
```
ans = 7
```

Naturally, there are 7 roots. Here they are all real, though in general they may come in complex-conjugate pairs:

```matlab
lambda
```
```
lambda = -0.9685 -0.8529 -0.5143 -0.0219 0.5040 0.9924 1.2544
```

Now, let's evaluate the fit at the data points:

```matlab
fitY = k*polyval_roots(lambda, x);
```
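Conceptually, polyval_roots just evaluates the product of the factors (x - lambda_i); a minimal stand-in could look like the following (the actual function may well be implemented differently, e.g. with more care about overflow at high degrees):

```matlab
% Hypothetical minimal version of polyval_roots: for each point x(j),
% form the product over i of (x(j) - lambda(i)); the caller supplies k.
polyval_roots_sketch = @(lambda, x) prod(x(:).' - lambda(:), 1);
```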

Of course, the fit is not an interpolant. Hence the following error is not zero:

```matlab
error = norm(fitY-y)/norm(y)
```
```
error = 0.1255
```

and it is a reliable metric for the quality of the fit throughout the interval.

```matlab
clf
plot(x, y, 'b', x, fitY, 'r--')
```

For such a low-degree polynomial, we can reliably compute the monomial coefficients by convolving the factors of the polynomial and multiplying by k:

```matlab
p = [1 -lambda(1)];
for i = 2:length(lambda)
    p = conv(p, [1 -lambda(i)]);
end
p = p*k
```
```
p = -21.5569 8.4761 49.4546 -9.7191 -33.7915 1.3359 5.7919 0.1260
```

Matlab's polyfit command gives the same result:

```matlab
pMatlab = polyfit(x, y, n)
```
```
pMatlab = -21.5569 8.4761 49.4546 -9.7191 -33.7915 1.3359 5.7919 0.1260
```

Now, let's increase the order to, say, n = 50:

```matlab
n = 50; [lambda, k] = polyfit_roots(x, y, n);
fitY = k*polyval_roots(lambda, x);
error = norm(fitY-y)/norm(y)
plot(x, y, 'b', x, real(fitY), 'r--')
```
```
error = 1.2304e-014
```

Note that we plotted real(fitY) because the result has very small imaginary values. How many roots does this polynomial have?

```matlab
length(lambda)
```
```
ans = 26
```

What has happened? polyfit_roots has determined that a degree-26 polynomial is enough to get errors close to its default tolerance. If we were to use a higher order, we would be fitting round-off errors.

Now, suppose the data is corrupted by noise:

```matlab
yNoise = y + randn(1, m)*0.1;
n = 50; [lambda, k] = polyfit_roots(x, yNoise, n);
fitYNoise = k*polyval_roots(lambda, x);
length(lambda)
plot(x, yNoise, 'b', x, real(fitYNoise), 'r--')
```
```
ans = 50
```

Because of the relatively high order, n = 50, we are fitting the noise. If we know how large the noise is, polyfit_roots can figure out the correct degree of the polynomial to smooth it out:

```matlab
tol = 0.1;
[lambda, k] = polyfit_roots(x, yNoise, n, tol);
fitYDenoised = k*polyval_roots(lambda, x);
length(lambda)
plot(x, y, 'b', x, real(fitYDenoised), 'r--')
```
```
ans = 8
```

## Higher orders

A function with many wiggles (taken from L. N. Trefethen's *Approximation Theory and Approximation Practice*):

```matlab
f = @(x)sin(6*x) + sin(60*exp(x));
```

Sample it at 1000 equally spaced points on [-1, 1]:

```matlab
m = 1000; x = linspace(-1, 1, m); y = f(x);
```

and fit a degree-200 polynomial to this data:

```matlab
n = 200;
tic
[lambda, k] = polyfit_roots(x, y, n);
fitY = k*polyval_roots(lambda, x);
toc
error = norm(fitY-y)/norm(y)
plot(x, y, 'b', x, real(fitY), 'r--')
length(lambda)
```
```
Elapsed time is 0.826032 seconds.
error = 2.0799e-013
ans = 147
```

polyfit_roots concluded that a degree-147 polynomial suffices for errors close to its default tolerance. Let's try using Matlab's polyfit, even with a much lower order, say, n = 50:

```matlab
n = 50;
plot(x, y, 'b', x, polyval(polyfit(x, y, n), x), 'r--')
```
```
Warning: Polynomial is badly conditioned. Add points with distinct X values, reduce the degree of the polynomial, or try centering and scaling as described in HELP POLYFIT.
```

## Runge's example

Let's try polyfit_roots on Runge's example:

```matlab
f = @(x)1./(1+25*x.^2);
```

Sample it at 100 equally spaced points on [-1, 1]:

```matlab
m = 100; x = linspace(-1, 1, m); y = f(x);
```

set n = 30:

```matlab
n = 30; [lambda, k] = polyfit_roots(x, y, n);
length(lambda)
```
```
ans = 30
```

and sample the fit on a finer grid

```matlab
x = linspace(-1, 1, 4000); y = f(x);
fitY = k*polyval_roots(lambda, x);
error = norm(fitY-y)/norm(y)
plot(x, y, 'b', x, real(fitY), 'r--')
```
```
error = 0.0023
```

Note that polyfit_roots will not always lower the degree of the polynomial enough. For example,

```matlab
m = 100; x = linspace(-1, 1, m); y = f(x);
n = 60; [lambda, k] = polyfit_roots(x, y, n);
length(lambda)
```
```
ans = 60
```

and sample the fit on a finer grid

```matlab
x = linspace(-1, 1, 4000); y = f(x);
fitY = k*polyval_roots(lambda, x);
error = norm(fitY-y)/norm(y)
plot(x, y, 'b', x, real(fitY), 'r--')
```
```
error = 1.4508
```

## Random data points

The data points need not be equally spaced, and the approximated functions need not be differentiable:

```matlab
f = @(x)abs(x);
x = 2*rand(1000, 1)-1; y = f(x);
n = 100; [lambda, k] = polyfit_roots(x, y, n);
length(lambda)
x = linspace(-1, 1, 4000); y = f(x);
fitY = k*polyval_roots(lambda, x);
error = norm(fitY-y)/norm(y)
plot(x, y, 'b', x, real(fitY), 'r--')
```
```
ans = 100
error = 0.0013
```

## Discontinuous functions

When the approximated function is discontinuous, we see the Gibbs phenomenon:

```matlab
f = @(x)sign(x);
x = linspace(-1, 1, 1000); y = f(x);
n = 100; [lambda, k] = polyfit_roots(x, y, n);
length(lambda)
x = linspace(-1, 1, 4000); y = f(x);
fitY = k*polyval_roots(lambda, x);
error = norm(fitY-y)/norm(y)
plot(x, y, 'b', x, real(fitY), 'r--')
```
```
ans = 99
error = 0.0797
```

## Complex data

Fitting complex data works:

```matlab
phi = linspace(0, 20*pi, 1000);
z = exp(1j*phi).*(phi/10+0.5);
n = 100; [lambda, k] = polyfit_roots(phi, z, n);
length(lambda)
fitZ = k*polyval_roots(lambda, phi);
error = norm(fitZ-z)/norm(z)
plot(z, 'b'); hold on; plot(fitZ, 'r--'); hold off
axis equal
```
```
ans = 65
error = 1.6885e-013
```

as well as fitting at complex locations:

```matlab
z = exp(1j*phi);
f = @(z)1./(z-1.5);
n = 100; [lambda, k] = polyfit_roots(z, f(z), n);
length(lambda)
x = linspace(-1, 1, 100);
[X, Y] = meshgrid(x);
Z = X + 1j*Y;
exact = f(Z);
fit = k*polyval_roots(lambda, Z);
subplot(121); imagesc(x, x, imag(exact)), hold on, plot(z, 'k--')
title('$f(z) = (z-1.5)^{-1}$, Im($f$)')
axis image
subplot(122); imagesc(x, x, imag(fit)), hold on, plot(z, 'k--')
title('Polynomial fit based on values on the unit circle')
axis image
```
```
ans = 100
```