## Assess Predictive Performance

If you plan to use a fitted model for forecasting, it is good practice to assess the model's predictive ability. Models that fit well in sample are not guaranteed to forecast well: overfitting, for example, can produce a good in-sample fit but poor predictive performance.

When checking predictive performance, it is important not to use your data twice. That is, the data you use to fit your model should be different from the data you use to assess forecasts. You can use cross-validation to evaluate out-of-sample forecasting ability:

1. Divide your time series into two parts: a training set and a validation set.

2. Fit a model to the training data.

3. Forecast the fitted model over the validation period.

4. Compare the forecasts to the holdout validation observations using plots and numerical summaries, such as the prediction mean square error.
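The steps above can be sketched in Python. This is a minimal illustration with made-up data, using an ordinary least-squares trend line (via `numpy.polyfit`) as a stand-in for whatever model you actually fit; the series, the holdout size `M`, and the model choice are all assumptions for the example.

```python
import numpy as np

# Hypothetical data: a noisy linear trend of length N = 100.
rng = np.random.default_rng(0)
N, M = 100, 20
t = np.arange(N)
y = 0.5 * t + rng.normal(scale=2.0, size=N)

# Step 1: divide the series into a training set and a validation set.
y_train, y_valid = y[: N - M], y[N - M :]

# Step 2: fit a model to the training data (a least-squares linear
# trend stands in for a real forecasting model here).
slope, intercept = np.polyfit(t[: N - M], y_train, deg=1)

# Step 3: forecast the fitted model over the validation period.
forecasts = slope * t[N - M :] + intercept

# Step 4: compare forecasts to the holdout observations with a
# numerical summary (prediction mean square error).
pmse = np.mean((y_valid - forecasts) ** 2)
print(f"PMSE over {M} validation points: {pmse:.3f}")
```

In practice you would also plot `y_valid` against `forecasts` (step 4 calls for plots as well as numerical summaries) before deciding whether the model forecasts acceptably.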

Prediction mean square error (PMSE) measures the discrepancy between model forecasts and observed data. Suppose you have a time series of length *N*, and you set aside the last *M* observations as validation points, denoted $${y}_{1}^{v},{y}_{2}^{v},\dots ,{y}_{M}^{v}.$$ After fitting your model to the first *N* – *M* data points (the training set), generate forecasts $${\widehat{y}}_{1}^{v},{\widehat{y}}_{2}^{v},\dots ,{\widehat{y}}_{M}^{v}.$$

The model PMSE is calculated as

$$\text{PMSE}=\frac{1}{M}{\displaystyle \sum _{i=1}^{M}{\left({y}_{i}^{v}-{\widehat{y}}_{i}^{v}\right)}^{2}}.$$
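The formula translates directly into code. Below is a small helper function implementing this definition; the function name and the example numbers are ours, chosen only to illustrate the calculation.

```python
import numpy as np

def pmse(y_valid, y_forecast):
    """Prediction mean square error: (1/M) * sum_i (y_i^v - yhat_i^v)^2."""
    y_valid = np.asarray(y_valid, dtype=float)
    y_forecast = np.asarray(y_forecast, dtype=float)
    return np.mean((y_valid - y_forecast) ** 2)

# Quick check with made-up numbers: forecast errors of 1, -1, and 2
# give squared errors 1, 1, 4, so PMSE = 6 / 3 = 2.
print(pmse([3.0, 5.0, 7.0], [2.0, 6.0, 5.0]))  # -> 2.0
```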

You can calculate PMSE for several choices of *M* to verify that your conclusions are robust to the size of the holdout sample.
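One way to check this robustness is to refit the model and recompute PMSE for each candidate holdout size. The sketch below does this for a hypothetical noisy-trend series, again using a least-squares linear trend as a placeholder model; the particular values of *M* are arbitrary.

```python
import numpy as np

# Hypothetical series: linear trend plus noise, length N = 120.
rng = np.random.default_rng(1)
N = 120
t = np.arange(N)
y = 0.3 * t + rng.normal(scale=1.5, size=N)

results = {}
for M in (10, 20, 30):
    # Refit the (placeholder) model on the first N - M points...
    slope, intercept = np.polyfit(t[: N - M], y[: N - M], deg=1)
    # ...forecast the last M points, and compute PMSE for this M.
    y_hat = slope * t[N - M :] + intercept
    results[M] = np.mean((y[N - M :] - y_hat) ** 2)
    print(f"M = {M:2d}: PMSE = {results[M]:.3f}")
```

If the PMSE values are broadly similar across the different holdout sizes, that supports the conclusion drawn from any single split.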