
Assess Predictive Performance

If you plan to use a fitted model for forecasting, a good practice is to assess the predictive ability of the model. Models that fit well in-sample are not guaranteed to forecast well. For example, overfitting can lead to good in-sample fit, but poor predictive performance.

When checking predictive performance, it is important not to use your data twice: the data you use to fit your model should be different from the data you use to assess forecasts. You can use cross-validation to evaluate out-of-sample forecasting ability (see the sketch after this list):

  1. Divide your time series into two parts: a training set and a validation set.

  2. Fit a model to your training data.

  3. Forecast the fitted model over the validation period.

  4. Compare the forecasts to the holdout validation observations using plots and numerical summaries (such as the prediction mean square error).
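
The following is a minimal Python sketch of these four steps using NumPy. It assumes a simple AR(1) model fit by ordinary least squares and a simulated series standing in for your data; the model choice, the holdout size, and the helper names (fit_ar1, forecast_ar1) are illustrative, not part of the procedure above.

```python
import numpy as np

def fit_ar1(y):
    """Fit y_t = c + phi*y_{t-1} + e_t by ordinary least squares."""
    X = np.column_stack([np.ones(len(y) - 1), y[:-1]])
    c, phi = np.linalg.lstsq(X, y[1:], rcond=None)[0]
    return c, phi

def forecast_ar1(c, phi, y_last, M):
    """Iterate the AR(1) recursion M steps ahead from the last training value."""
    forecasts = np.empty(M)
    prev = y_last
    for i in range(M):
        prev = c + phi * prev
        forecasts[i] = prev
    return forecasts

# Simulated AR(1) series standing in for your data (illustrative only).
rng = np.random.default_rng(1)
N, M = 200, 20                      # series length and holdout size
y = np.empty(N)
y[0] = 0.0
for t in range(1, N):
    y[t] = 1.0 + 0.7 * y[t - 1] + rng.standard_normal()

# 1. Split into a training set and a validation set.
y_train, y_valid = y[:N - M], y[N - M:]

# 2. Fit the model to the training data.
c, phi = fit_ar1(y_train)

# 3. Forecast the fitted model over the validation period.
y_hat = forecast_ar1(c, phi, y_train[-1], M)

# 4. Compare the forecasts to the holdout observations.
pmse = np.mean((y_valid - y_hat) ** 2)
print(f"PMSE over the last {M} observations: {pmse:.3f}")
```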

Prediction mean square error (PMSE) measures the discrepancy between model forecasts and observed data. Suppose you have a time series of length $N$, and you set aside $M$ validation points, denoted $y_1^v, y_2^v, \ldots, y_M^v$. After fitting your model to the first $N - M$ data points (the training set), generate forecasts $\hat{y}_1^v, \hat{y}_2^v, \ldots, \hat{y}_M^v$.

The model PMSE is calculated as

$$\mathrm{PMSE} = \frac{1}{M}\sum_{i=1}^{M}\left(y_i^v - \hat{y}_i^v\right)^2.$$

You can calculate the PMSE for various choices of $M$ to verify the robustness of your results.
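
As a sketch of that robustness check, assuming the same illustrative AR(1) setup as above, you might repeat the split-fit-forecast cycle for several holdout sizes and compare the resulting PMSE values; the helper name holdout_pmse is hypothetical.

```python
import numpy as np

def holdout_pmse(y, M):
    """Fit an AR(1) model by least squares to y[:-M] and return the PMSE on y[-M:]."""
    y_train, y_valid = y[:-M], y[-M:]
    X = np.column_stack([np.ones(len(y_train) - 1), y_train[:-1]])
    c, phi = np.linalg.lstsq(X, y_train[1:], rcond=None)[0]
    # Iterate the AR(1) recursion M steps ahead from the last training value.
    y_hat, prev = np.empty(M), y_train[-1]
    for i in range(M):
        prev = c + phi * prev
        y_hat[i] = prev
    return np.mean((y_valid - y_hat) ** 2)

# Simulated AR(1) data standing in for your series (illustrative only).
rng = np.random.default_rng(1)
N = 200
y = np.empty(N)
y[0] = 0.0
for t in range(1, N):
    y[t] = 1.0 + 0.7 * y[t - 1] + rng.standard_normal()

# Check that the PMSE is stable across several holdout sizes.
for M in (10, 20, 40):
    print(f"M = {M:3d}: PMSE = {holdout_pmse(y, M):.3f}")
```

If the PMSE changes markedly as $M$ varies, the forecasting performance of the model may be sensitive to the particular holdout period you chose.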
