Training Calendar

A Practical Guide to Forecasting

  • Location: University of Oxford
  • Duration: 3 days
  • Software: OxMetrics
  • Level: Intermediate
  • Delivered By: Jennifer L. Castle and David F. Hendry (Economics Department, Oxford and Reading Universities, UK)
  • Topic: Econometrics, Forecasting, Time series

7th-9th August 2018

Who should attend

This course is aimed at economists and applied econometricians who work with time-series data and want to keep up to date with major recent developments in applied econometric modelling for forecasting.

Learning outcomes

  • Develop skills in selecting econometric models;
  • Produce and evaluate forecasts, and understand when forecasts are likely to be accurate;
  • Gain exposure to the OxMetrics software suite and the R gets package.

Learning style

The course is applied: all lectures combine theory with practical sessions. The course provides an introduction to the theory and practice of economic forecasting in a nonstationary and evolving world, when the model differs from the data generation process (DGP). It covers the methodology, practice, implementation and evaluation of economic forecasting, and the main sources of forecast error, which motivate possible robustifications of forecasts. The framework, its basic concepts and their implications will be explained for integrated-cointegrated processes intermittently subject to outliers and structural breaks. Live applications to empirical time series will demonstrate the approach.


Course Agenda


The course will focus on understanding why forecasts are accurate or not, using mostly macroeconomic examples to motivate the analysis. A taxonomy of forecast errors will be used, allowing for structural change at the forecast origin, the forecasting model to be mis-specified over the sample period, the parameters of the model to be estimated (possibly inconsistently) from data which might be measured with error, the forecasts to commence from incorrect initial conditions, and innovation errors to cumulate over the forecast horizon. The taxonomy reveals the central role of unanticipated location shifts, and helps explain the outcomes of forecasting competitions. Other potential sources of forecast failure seem less relevant. Regime-shift non-stationarity can also be removed by co-breaking (the cancellation of breaks across linear combinations of variables). Corrections to reduce forecast-error biases (intercept and forecast-error corrections) help robustify forecasts in the face of location shifts. Forecast pooling and factor-forecasts are noted, but the recommended procedure after forecast failure is to difference a congruent and encompassing empirical model selected in-sample by Autometrics. Attempts to forecast breaks and during breaks will be described, as will recent developments in Nowcasting and Forediction.

Lectures will be illustrated by empirical modelling exercises using PcGive where participants undertake the computing for empirical modelling and forecasting using Autometrics, a procedure for automatic model selection embedded in PcGive/OxMetrics. An introductory session on OxMetrics, a modular system for the econometric and statistical analysis of economic, financial and marketing data, will be presented on the first day.


Day 1

Session 1: Introduction to Economic Forecasting

Economic forecasting occurs in a non-stationary and evolving world, when the model and mechanism differ. The framework and basic concepts are sketched. A taxonomy of forecast errors is explained, allowing for structural change in the forecast period, the model and DGP to differ over the sample period, the parameters of the model to be estimated (possibly inconsistently) from the data, and the forecasts to commence from incorrectly measured initial conditions. This reveals the central role of shifts in coefficients of deterministic terms, called location shifts, and helps explain the outcomes of forecasting competitions.
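The central role of location shifts can be illustrated with a small simulation, sketched here in Python rather than the OxMetrics/PcNaive tools used on the course; the AR(1) parameters and the shift size are invented for the example:

```python
import numpy as np

# Illustrative sketch (not course material; all numbers invented): an AR(1)
# whose long-run mean shifts at the forecast origin. The model is estimated
# on pre-shift data only, so its dynamic forecasts revert to the old
# equilibrium and every forecast error has the same sign.
rng = np.random.default_rng(0)
rho, mu_old, mu_new, T, H = 0.7, 0.0, 5.0, 200, 8

y = np.empty(T + H)
y[0] = mu_old
for t in range(1, T + H):
    mu = mu_old if t < T else mu_new      # location shift at the forecast origin
    y[t] = mu + rho * (y[t - 1] - mu) + rng.normal()

# Estimate the AR(1) by OLS on the pre-shift sample
Y = y[1:T]
X = np.column_stack([np.ones(T - 1), y[:T - 1]])
b0, b1 = np.linalg.lstsq(X, Y, rcond=None)[0]

# Dynamic multi-step forecasts from the now mis-specified model
f = [y[T - 1]]
for _ in range(H):
    f.append(b0 + b1 * f[-1])
errors = y[T:] - np.array(f[1:])
print(round(float(errors.mean()), 2))     # systematically positive errors
```

Because the estimated model reverts to the pre-shift mean, the forecast errors do not average out over the horizon: that systematic bias is the signature of forecast failure after a location shift.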

Session 2: Introduction to OxMetrics

In this applied session we will introduce OxMetrics (data input, transformation, graphics, modules and recording results) and PcGive, the basic modeling tool, including model formulation, selection, and evaluation. This session will also explore the forecasting tools available in the software, including graphical and statistical output.


Session 3: Forecasting Problems

We examine the main sources of forecast failure using artificial data in a series of examples to highlight the results. PcNaive, a software package within OxMetrics, will be introduced to generate the forecasting examples explored. A range of parameter changes in integrated-cointegrated, I(1), time series is hardly reflected in econometric models thereof: zero-mean shifts are not easily detected by conventional constancy tests. The breaks in question are changes that leave the unconditional expectations of the I(0) components unaltered. Thus, dynamics, adjustment speeds, etc. may alter with a low chance of detection. However, shifts in long-run means are generally noticeable. We will draw important implications for the choice of forecasting device.
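The contrast drawn here can be mimicked with a toy simulation (a Python sketch with invented parameters; it uses a crude split-sample mean comparison, not the constancy tests discussed in the session):

```python
import numpy as np

# Illustrative sketch (invented parameters): a mid-sample shift in the AR
# dynamics leaves the unconditional mean of the series essentially unchanged
# and so is hard to spot in the levels, while a shift in the long-run mean
# is immediately visible in a split-sample comparison.
rng = np.random.default_rng(1)
half = 500

def ar1(rho, mu, n):
    """Simulate an AR(1) around mean mu: y_t = mu + rho*(y_{t-1} - mu) + e_t."""
    y = [mu]
    for _ in range(n - 1):
        y.append(mu + rho * (y[-1] - mu) + rng.normal())
    return np.array(y)

# Case A: dynamics shift (rho 0.5 -> 0.8); unconditional mean stays at 0
a = np.concatenate([ar1(0.5, 0.0, half), ar1(0.8, 0.0, half)])
# Case B: location shift (mean 0 -> 2); dynamics unchanged
b = np.concatenate([ar1(0.5, 0.0, half), ar1(0.5, 2.0, half)])

a_diff = a[half:].mean() - a[:half].mean()   # close to zero
b_diff = b[half:].mean() - b[:half].mean()   # close to the shift of 2
print(round(float(a_diff), 2), round(float(b_diff), 2))
```

Case A alters the adjustment speed, and hence the variance and persistence, yet the series keeps oscillating around zero; case B moves the long-run mean and is hard to miss.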


Day 2

Session 4: Selecting Forecasting Models

We discuss the econometric theory that explains the remarkable properties of automatic selection. Three stages are considered:

  1. Specification of the General Unrestricted Model (GUM), distinguishing theory-based variables and other candidates;
  2. Mis-specification testing of the first feasible GUM;
  3. Selection of a specific model by multi-path searches retaining theory variables.

Autometrics will be explained and applied in this hands-on session, including impulse-indicator saturation (IIS), its invaluable role in removing outliers, and its generalization to step-indicator saturation (SIS) for detecting breaks. Methods for handling more candidate variables than observations are shown, leading to empirical model discovery.
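The logic of impulse-indicator saturation can be sketched outside Autometrics. The following Python fragment (an illustration only, with invented data and a `significant_impulses` helper defined for the sketch) enters an impulse dummy for every observation in two half-sample blocks and keeps those with large t-statistics:

```python
import numpy as np

# Illustrative sketch of the IIS idea, not the Autometrics implementation:
# create an impulse dummy for every observation, enter them in two
# half-sample blocks, and retain dummies significant at a tight level.
rng = np.random.default_rng(2)
T = 100
y = 1.0 + rng.normal(size=T)
y[40] += 8.0                               # a single large outlier

def significant_impulses(y, idx, t_crit=3.0):
    """Regress y on a constant plus impulses at positions idx; keep |t| > t_crit."""
    cols = [np.ones(len(y))] + [(np.arange(len(y)) == i).astype(float) for i in idx]
    X = np.column_stack(cols)
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta
    s2 = resid @ resid / (len(y) - X.shape[1])
    se = np.sqrt(s2 * np.diag(np.linalg.inv(X.T @ X)))
    t = beta / se
    return [i for i, tv in zip(idx, t[1:]) if abs(tv) > t_crit]

# Two blocks of impulses, then a re-test of the union of survivors
kept = significant_impulses(y, range(0, 50)) + significant_impulses(y, range(50, 100))
print(significant_impulses(y, kept))       # the outlier at t = 40 survives
```

Splitting the full set of T dummies into blocks keeps each regression feasible even though T indicators plus a constant exceed the sample size: the same device that lets Autometrics handle more candidate variables than observations.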

Session 5: Foundations of Unpredictability

Six aspects of the role of unpredictability in forecasting are distinguished, compounding four additional mistakes likely when estimating forecasting models. Many of the famous theorems of economic forecasting do not hold in a non-stationary and evolving world, when the model and mechanism differ; rather their converses often do. Equilibrium-correction models are shown to be a risky device from which to forecast. Potential explanations for the intermittent occurrence of forecast failure include poor models, inaccurate data, inadequate methodology, mis-calculation of uncertainty, structural change, overparameterization, incorrect estimators, and inappropriate variables. In fact, most of these can be shown not to explain forecast failure, and the forecast-error taxonomy shows that forecast failure depends primarily on forecast-period events, particularly location shifts. The role of model selection versus averaging is noted as part of a ‘factor forecasting’ approach.

Session 6: Robustifying Forecasts

In this practical session we shall explore one method of robustifying forecasts to location shifts. Differencing lowers the polynomial degree of deterministic terms: double differencing usually leads to a mean-zero, trend-free series, as continuous acceleration is rare in economics (except perhaps during hyperinflations or major technological shifts). The impact on forecast performance is traced. A new explanation for the empirical success of second differencing is proposed. Differencing is shown to have merit in the face of location shifts, and has been used in policy decisions in the UK concerning TV advertising.

Forecasting will be conducted for several model variants, with and without forecast failure. The role of parameter estimation uncertainty is considered. The practical role of forecast-error corrections will be investigated, and many theoretical issues illustrated through both successful and unsuccessful forecasting, including how to cope with location shifts. Examples will include Japanese Export forecasts and UK GDP forecasts. Autometrics will be used to select the forecasting models.
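The robustness of differenced devices to location shifts can be sketched in Python (an artificial example with invented numbers, comparing a pre-break mean forecast with the device that sets the second difference to zero):

```python
import numpy as np

# Illustrative sketch: after an unmodelled location shift, the device that
# forecasts next period's change by the latest observed change (equivalently,
# sets the second difference to zero) recovers after one period, whereas a
# forecast based on the pre-break mean fails over the whole horizon.
rng = np.random.default_rng(3)
T, H = 100, 20
y = np.concatenate([rng.normal(0.0, 1.0, T), rng.normal(6.0, 1.0, H)])  # shift at T

mean_model_err, robust_err = [], []
for h in range(1, H):
    t = T + h                                      # origins inside the new regime
    mean_model_err.append(y[t] - y[:T].mean())     # device using the old mean
    robust_err.append(y[t] - (2 * y[t - 1] - y[t - 2]))  # double-difference device
print(round(float(np.mean(np.abs(mean_model_err))), 2),
      round(float(np.mean(np.abs(robust_err))), 2))
```

The robust device pays a one-period penalty at the break and extra noise thereafter, which is the price of its insurance against location shifts.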


Day 3

Session 7: Solutions to Forecast Failure

Having explored differencing in the previous session, this session will consider a range of potential solutions. Six central possibilities are forecasting the break or during it, differencing or smoothed differencing, co-breaking, intercept corrections, rapid updating, and pooling. We note recent research on forecasting breaks, and the demanding conditions under which that might be possible, as well as learning about breaks during transitions.

The possible roles of parsimony and collinearity in forecasting highlight the potential importance of excluding irrelevant, but changing, effects. While intercept corrections help robustify forecasts against biases due to location shifts, they are ineffective for measurement errors; conversely, EWMA corrections are excellent for measurement errors, but not for breaks. Rapid updating is related both to moving windows and to forecasting breaks, with some properties that can help alleviate failure.
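A minimal sketch of an intercept correction, in Python with invented numbers: the correction simply carries forward the error observed at the forecast origin.

```python
import numpy as np

# Illustrative sketch of an intercept correction: after a location shift, add
# the forecast error observed at the origin to later forecasts, offsetting
# the bias. If the origin error were instead a one-off measurement error, the
# same correction would inject noise, which is why it does not fix
# mismeasurement.
rng = np.random.default_rng(4)
T, H, shift = 200, 10, 4.0
y = np.concatenate([rng.normal(0, 1, T), rng.normal(shift, 1, H)])

mu_hat = y[:T].mean()                  # in-sample mean model, pre-shift
correction = y[T] - mu_hat             # error observed at the forecast origin
plain = y[T + 1:] - mu_hat                     # uncorrected forecast errors
corrected = y[T + 1:] - (mu_hat + correction)  # intercept-corrected errors
print(round(float(np.abs(plain).mean()), 2),
      round(float(np.abs(corrected).mean()), 2))
```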

Forecast pooling can also sometimes help, but needs to be combined with model selection to exclude really bad forecasting devices. Then, pooling can lead to improved forecasts over the best of a set of devices in a world of mis-specified models and location shifts. However, care is needed in selecting what enters the pool, and indiscriminate pooling (as in Bayesian model averaging) can be counter-productive.
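The value of screening the pool first can be seen in a toy Python example (forecast devices and their biases are invented for the illustration):

```python
import numpy as np

# Illustrative sketch: pooling forecasts can help, but a really bad device
# dragged into the average is costly, so selecting what enters the pool
# matters. All devices and numbers are artificial.
rng = np.random.default_rng(5)
truth = rng.normal(0, 1, 50)
good1 = truth + rng.normal(0, 1, 50)          # unbiased but noisy device
good2 = truth + rng.normal(0, 1, 50)          # another unbiased device
bad = truth + 10 + rng.normal(0, 1, 50)       # badly biased device

def mse(f):
    """Mean squared forecast error against the simulated outcomes."""
    return float(np.mean((truth - f) ** 2))

pool_all = (good1 + good2 + bad) / 3          # indiscriminate pooling
pool_selected = (good1 + good2) / 2           # bad device screened out first
print(round(mse(good1), 2), round(mse(pool_all), 2), round(mse(pool_selected), 2))
```

Averaging the two unbiased devices reduces noise, but averaging in the biased one imports a third of its bias, so the indiscriminate pool is worse than the best single device.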

Session 8: Introduction to Gets in R and discussion

In this final session we shall introduce the gets package in R as well as discuss any questions. R is a free software environment for statistical computing and graphics, and gets is a package that enables general-to-specific modelling of the mean and variance of a regression, indicator saturation methods for detecting structural breaks in the mean, and forecasting.




  • Clements, Michael P. and Hendry, David F. (1998) Forecasting Economic Time Series. Cambridge: Cambridge University Press.
  • Clements, M.P. and Hendry, D.F. (2002), ‘An Overview of Economic Forecasting’, Chapter 1 in Clements, M.P. and Hendry, D.F. (eds.) A Companion to Economic Forecasting. Oxford: Blackwells.
  • Castle, Jennifer L., Clements, M.P. and Hendry, D.F. (2017), ‘An Overview of Forecasting Facing Breaks’, Journal of Business Cycle Research, 12, 3–23.
  • Ericsson, Neil R. (2017), ‘Economic Forecasting in Theory and Practice: An Interview with David F. Hendry’, International Journal of Forecasting, 33, 2, 523–542.


  • Castle, J.L., Fawcett, Nicholas W.P. and Hendry, D.F. (2010), ‘Forecasting with equilibrium-correction models during structural breaks’, Journal of Econometrics, 158(1), 25–36.
  • Castle, J.L., Fawcett, N.W.P. and Hendry, D.F. (2011), ‘Forecasting Breaks and Forecasting During Breaks’, Chapter 11 in Clements, M.P. and Hendry, D.F. (eds.) Oxford Handbook of Economic Forecasting, Oxford: Oxford University Press.
  • Castle, J.L., Clements, M.P. and Hendry, D.F. (2017), ‘Robust Approaches to Forecasting’, International Journal of Forecasting, 31, 99–112.
  • Forni, M., Hallin, M., Lippi, M., and Reichlin, L. (2000), ‘The Generalized Factor Model: Identification and Estimation’, The Review of Economics and Statistics, 82, 540–554.
  • Stock, James H. and Watson, Mark W. (2002), ‘Macroeconomic Forecasting using Diffusion Indices’. Journal of Business and Economic Statistics, 20, 147–162.
  • Castle, J.L., Clements, M.P. and Hendry, D.F. (2013), ‘Forecasting by Factors, by Variables, by Both, or Neither’, Journal of Econometrics, 177(2), 305–319.
  • Castle, J.L., Hendry, D.F. and Martinez, Andrew B. (2017), ‘Evaluating Forecasts, Narratives and Policy using a Test of Invariance’, Econometrics, 5(39). DOI:10.3390/econometrics5030039.


  • Doornik, Jurgen A. (2009), ‘Autometrics’, 88–121 in Castle, J. L. and Shephard, N. (eds.), The Methodology and Practice of Econometrics. Oxford: Oxford University Press.
  • Johansen, Søren and Nielsen, Bent (2009), ‘An analysis of the indicator saturation estimator as a robust regression estimator’, 1–36 in Castle and Shephard op. cit.
  • Castle, J.L., Doornik, J.A. and Hendry, D.F. (2012), ‘Model Selection when there are Multiple Breaks’, Journal of Econometrics, 169, 239–246.
  • Hendry, D.F. and Doornik, J.A. (2014) Empirical Model Discovery and Theory Evaluation, MIT Press Cambridge, Mass.


  • Doornik, J.A. (2013) An Introduction to OxMetrics 7. London: Timberlake Consultants Press.
  • Doornik, J.A. and Hendry, D.F. (2013) Empirical Econometric Modelling Using PcGive: Volume I. London: Timberlake Consultants Press.
  • Sucarrat, G., Pretis, F. and Reade, J.J. (2017) gets: General-to-Specific (GETS) Modelling and Indicator Saturation Methods, R Package Version 0.12.
  • Pretis, F., Reade, J.J. and Sucarrat, G. (2016) ‘General-to-Specific (GETS) Modelling and Indicator Saturation with the R Package gets (version 0.7)’, Working Paper 794, Department of Economics, University of Oxford; forthcoming in the Journal of Statistical Software.

    Knowledge of basic econometric concepts and regression analysis at the undergraduate level is the minimum requirement, as in David F. Hendry and Bent Nielsen (2007) Econometric Modelling: A Likelihood Approach, Princeton University Press. Previous work experience with econometrics would be advantageous. Knowledge of the econometric software package OxMetrics is not required. Participants should install OxMetrics, R and the gets package before the course. Licences will be provided for OxMetrics; once R has been installed, the command install.packages("gets") will install the gets package (RStudio may also be downloaded as a somewhat more user-friendly interface for R).

    All prices exclude VAT or local taxes where applicable.


    Timberlake Consultants