The course provides an introduction to the theory and practice of economic forecasting facing a non-stationary and evolving world, when the model differs from the data generation process (DGP). It covers the methodology, practice, implementation and evaluation of economic forecasting, and the main sources of forecast error, leading to possible robustifications of forecasts. The framework, its basic concepts and its implications will be explained for integrated-cointegrated processes intermittently subject to outliers and structural breaks. Live applications to empirical time series will demonstrate the approach.
This course is aimed at economists and applied econometricians who work with time-series data and want to keep up to date with major recent developments in applied econometric modelling for forecasting.

Learning outcomes:
The course concerns the theory and practice of econometric modelling and forecasting in a non-stationary and evolving world, when the model and mechanism differ. The main model class is a vector autoregression in integrated-cointegrated variables leading to an equilibrium-correction system, but intermittently subject to structural breaks. A combined theory and data approach to modelling economic phenomena is outlined, using automated general-to-specific methods to select models. The course will outline the remarkable properties of the proposed approach, demonstrating how to handle more variables than observations, outliers or location shifts at any unknown point in the sample, and unknown functional forms.
The course will develop an understanding of why forecasts are accurate or not, using mostly macroeconomic examples to motivate the analysis. A taxonomy of forecast errors will be used, focusing on practical issues that confront most forecasters, including allowing for structural change at the forecast origin, the forecasting model to be mis-specified over the sample period, the parameters of the model to be estimated (possibly inconsistently) from data which might be measured with error, the forecasts to commence from incorrect initial conditions, and innovation errors to cumulate over the forecast horizon. The taxonomy reveals the central role of unanticipated location shifts, and helps explain the outcomes of forecasting competitions. Other potential sources of forecast failure seem less relevant. Regime-shift non-stationarity can also be removed by co-breaking (the cancellation of breaks across linear combinations of variables). Corrections to reduce forecast-error biases (intercept and forecast-error corrections) help robustify forecasts in the face of location shifts. Forecast pooling and factor-forecasts are noted, but the recommended procedure after forecast failure is to difference a congruent and encompassing empirical model selected in-sample by Autometrics. Attempts to forecast breaks and during breaks will be described, as will recent developments in Nowcasting and Forediction.
Lectures will be illustrated by empirical modelling exercises using PcGive where participants undertake the computing for empirical modelling and forecasting using Autometrics, a procedure for automatic model selection embedded in PcGive/OxMetrics. An introductory session on OxMetrics, a modular system for the econometric and statistical analysis of economic, financial and marketing data, will be presented on the first day. Software for the duration of the course will be kindly provided by Timberlake Consultants, and R can be downloaded for free.
The framework for economic modelling and forecasting, its basic concepts and main implications will be sketched. The theory of reduction underpins economic modelling: models with no losses on reduction are congruent; those that explain rival models are encompassing. The main reductions correspond to key econometric concepts (causality, exogeneity, invariance, etc.), and are the null hypotheses of model-evaluation tests, sustained by a taxonomy of evaluation information. Congruent and encompassing submodels can, therefore, be justified, motivating the question ‘how should they be selected?’ The key problems in forecasting are also highlighted, emphasizing the distinction between determining economic relationships or testing theories and forecasting. Good models may forecast badly and bad models can forecast well – a point that will be explored throughout the course.
In this applied session we will introduce OxMetrics (data input, transformation, graphics, modules and recording results) and PcGive, the basic modelling tool, including model formulation, selection and evaluation. The session will also explore the forecasting tools available in the software, including graphical and statistical output. Various applications will illustrate the software.
Model selection theory poses great difficulties: all statistics for selecting models and evaluating their specifications have distributions, usually interdependent, and possibly altered by every modelling decision. A range of approaches to model selection is discussed, highlighting the problems with approaches that search only one path. General-to-specific (Gets) modelling will be described, emphasizing automatic procedures. Gets mimics reduction by simplifying a congruent general unrestricted model (GUM) to a dominant minimal representation.
This is a hands-on session in which Autometrics will be explained. Computer automation of selection algorithms has revealed high success rates, and allows operational studies of alternative strategies. Theoretical and practical developments of Autometrics are explained, and we consider its performance across different states of nature (unknown to the empirical investigator). The properties of model selection will be discussed by way of a class Monte Carlo experiment, in which each participant generates a draw of data from a DGP using PcNaive, and we compare the retention of relevant and irrelevant variables with the theory predictions, contrasting the results with the notion of ‘size’. Methods for handling more candidate variables than observations are shown, leading to empirical model discovery.
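The logic of that experiment can be sketched outside PcNaive. The Python fragment below is an illustrative stand-in, not the course material: it uses plain one-pass t-test selection rather than Autometrics’ multi-path search, and all names and settings are our own. It generates data with one relevant and twenty irrelevant candidate regressors, and records how often each kind is retained at roughly a 1% significance level; the retention rate of irrelevant variables (the ‘gauge’) should stay near the nominal level, while the relevant variable is almost always kept.

```python
import numpy as np

rng = np.random.default_rng(42)
T, k_irr, n_rep = 100, 20, 500
crit = 2.63                                    # roughly a 1% two-sided t critical value
retained_rel, retained_irr = 0, 0

for _ in range(n_rep):
    x_rel = rng.standard_normal(T)             # the single relevant variable (beta = 0.5)
    X = np.column_stack([x_rel, rng.standard_normal((T, k_irr))])
    y = 0.5 * x_rel + rng.standard_normal(T)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (T - X.shape[1])
    se = np.sqrt(s2 * np.diag(np.linalg.inv(X.T @ X)))
    t = beta / se
    retained_rel += abs(t[0]) > crit           # was the relevant variable kept?
    retained_irr += (np.abs(t[1:]) > crit).sum()

gauge = retained_irr / (n_rep * k_irr)         # retention rate of irrelevant variables
potency = retained_rel / n_rep                 # retention rate of the relevant variable
print(f"gauge = {gauge:.3f}, potency = {potency:.2f}")
```

Retaining an occasional spuriously significant variable at the chosen significance level is thus the calibrated, and small, cost of searching over many candidates.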
Saturation techniques will be explained, including impulse-indicator saturation (IIS) and its invaluable role in removing outliers, together with its generalizations: step-indicator saturation (SIS) for detecting breaks, multiplicative indicator saturation (MIS) for testing for structural breaks in model parameters at unknown points in time, and designer indicator saturation, in which break shapes are designed to detect regular shift patterns. The theory of saturation will be explained: under the null of no outliers or shifts, there is almost no loss of efficiency from testing T indicators at significance level α = 1/T, even in dynamic models. In this hands-on session we shall apply saturation to datasets under both the null and the alternative, revealing the importance of modelling in-sample breaks and shifts for forecast performance.
We examine the main sources of forecast failure using artificial data in a series of examples to highlight the results. PcNaive, a software package within OxMetrics, will be introduced to generate the forecasting examples explored. A range of parameter changes in integrated-cointegrated, I(1), time series is hardly reflected in econometric models thereof: zero-mean shifts are not easily detected by conventional constancy tests. The breaks in question are changes that leave the unconditional expectations of the I(0) components unaltered. Thus, dynamics, adjustment speeds, etc. may alter with a low chance of detection. However, shifts in long-run means are generally noticeable. We will draw important implications for the choice of forecasting device.
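That asymmetry in detectability can be mimicked with artificial data. In the sketch below (plain Python rather than PcNaive, with a crude two-sample mean-comparison statistic standing in for a proper constancy test; all settings are illustrative assumptions), a change in the autoregressive dynamics that leaves the long-run mean at zero moves the statistic far less than a shift in the long-run mean itself:

```python
import numpy as np

rng = np.random.default_rng(7)
T, half, n_rep = 200, 100, 200

def ar1(rho1, rho2, mu1, mu2):
    """AR(1) whose slope and long-run mean may each change at mid-sample."""
    y = np.zeros(T)
    for t in range(1, T):
        rho, mu = (rho1, mu1) if t < half else (rho2, mu2)
        y[t] = mu * (1 - rho) + rho * y[t - 1] + rng.standard_normal()
    return y

def mean_shift_stat(y):
    """|t|-type statistic comparing the means of the two half-samples."""
    a, b = y[:half], y[half:]
    return abs(b.mean() - a.mean()) / np.sqrt(a.var(ddof=1) / half + b.var(ddof=1) / half)

# average the statistic over replications for each kind of break
t_dyn = np.mean([mean_shift_stat(ar1(0.3, 0.8, 0.0, 0.0)) for _ in range(n_rep)])  # dynamics shift, mean stays 0
t_loc = np.mean([mean_shift_stat(ar1(0.5, 0.5, 0.0, 2.0)) for _ in range(n_rep)])  # long-run mean shifts 0 -> 2
print(f"dynamics shift: mean statistic {t_dyn:.1f};  location shift: mean statistic {t_loc:.1f}")
```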
Six aspects of the role of unpredictability in forecasting are distinguished, compounding four additional mistakes likely when estimating forecasting models. Many of the famous theorems of economic forecasting do not hold in a non-stationary and evolving world, when the model and mechanism differ; rather their converses often do. Equilibrium-correction models are shown to be a risky device from which to forecast. Potential explanations for the intermittent occurrence of forecast failure include poor models, inaccurate data, inadequate methodology, mis-calculation of uncertainty, structural change, overparameterization, incorrect estimators, and inappropriate variables. In fact, using a simplified taxonomy of forecast errors, most of these can be shown not to explain forecast failure, and the forecast-error taxonomy shows that forecast failure depends primarily on forecast-period events, particularly location shifts.
There is a range of potential solutions to forecast failure. Six central possibilities are forecasting the break or during it, differencing or smoothed differencing, co-breaking, intercept corrections, rapid updating, and pooling. In this practical session we shall explore one method of robustifying forecasts to location shifts. Differencing lowers the polynomial degree of deterministic terms: double differencing usually leads to a mean-zero, trend-free series, as continuous acceleration is rare in economics (except perhaps during hyperinflations or major technological shifts). The impact on forecast performance is traced, and a new explanation for the empirical success of second differencing is proposed. Differencing is shown to have merit in the face of location shifts, and has been used in policy decisions in the UK concerning TV advertising. Forecasting will be conducted for several model variants, with and without forecast failure. The role of parameter-estimation uncertainty is considered. The practical role of forecast-error corrections will be investigated, and many theoretical issues illustrated through both successful and unsuccessful forecasting, including how to cope with location shifts. Examples will include Japanese export forecasts and UK GDP forecasts. Autometrics will be used to select the forecasting models.
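A small simulation shows why a differenced (‘no change’) device is robust to a location shift: a forecast that keeps reverting to the old in-sample mean fails in every post-break period, whereas the random-walk forecast implied by differencing mis-forecasts once, at the break, and then adapts. The numpy sketch below is illustrative only; the shift size, dates and variable names are our assumptions, not the course’s PcGive exercise.

```python
import numpy as np

rng = np.random.default_rng(3)
T, T0 = 120, 80
mu = np.where(np.arange(T) < T0, 0.0, 5.0)   # unanticipated location shift of 5 s.d. at t = 80
y = mu + rng.standard_normal(T)

mu_hat = y[:T0].mean()                        # long-run mean estimated on the pre-break sample

# 1-step-ahead forecast errors over the post-break period
e_mean = y[T0:] - mu_hat                      # model that keeps reverting to the old mean
e_rw = y[T0:] - y[T0 - 1:-1]                  # robust random-walk device: forecast no change

rmsfe_mean = np.sqrt(np.mean(e_mean ** 2))
rmsfe_rw = np.sqrt(np.mean(e_rw ** 2))
print(f"old-mean forecasts RMSFE = {rmsfe_mean:.2f}; robust device RMSFE = {rmsfe_rw:.2f}")
```

The robust device trades a slightly noisier forecast in quiet periods for a dramatically smaller error after the shift, which is the essence of its merit in the face of location shifts.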
In this session we shall undertake a practical modelling and forecasting exercise, using a dataset on the UK’s CO2 emissions. First, a forecast-error taxonomy is discussed for the case where there are unmodelled variables, forecast ‘off-line’. The taxonomy highlights the importance of shifts in the means of the unmodelled variables, which can induce forecast failure. Using the open-model taxonomy to explain forecasting results, we then undertake a practical application, starting with a single-equation analysis in which a number of steps are taken to produce conditional forecasts.
In this final session we shall introduce the gets package in R as well as discuss any questions. R is a free software environment for statistical computing and graphics, and gets is a package that enables general-to-specific modelling of the mean and variance of a regression, indicator saturation methods for detecting structural breaks in the mean, and forecasting.
Clements, Michael P. and Hendry, David F. (1998), Forecasting Economic Time Series. Cambridge: Cambridge University Press.
Clements, M.P. and Hendry, D.F. (2002), ‘An Overview of Economic Forecasting’, Chapter 1 in Clements, M.P. and Hendry, D.F. (eds.), A Companion to Economic Forecasting. Oxford: Blackwells.
Castle, Jennifer L., Clements, M.P. and Hendry, D.F. (2017), ‘An Overview of Forecasting Facing Breaks’, Journal of Business Cycle Research, 12, 3–23. https://www.economics.ox.ac.uk/materials/papers/14384/paper-779.pdf
Ericsson, Neil R. (2017), ‘Economic Forecasting in Theory and Practice: An Interview with David F. Hendry’, International Journal of Forecasting, 33(2), 523–542. https://www.federalreserve.gov/econresdata/ifdp/2016/files/ifdp1184.pdf
Hendry, D.F. (2015), Introductory Macro-econometrics: A New Approach. London: Timberlake Consultants Press. http://www.timberlake.co.uk/macroeconometrics.html
Castle, J.L., Fawcett, Nicholas W.P. and Hendry, D.F. (2010), ‘Forecasting with equilibrium-correction models during structural breaks’, Journal of Econometrics, 158(1), 25–36. https://www.economics.ox.ac.uk/department-of-economics-discussion-paper-series/forecasting-with-equilibrium-correction-models-during-structural-breaks
Castle, J.L., Fawcett, N.W.P. and Hendry, D.F. (2011), ‘Forecasting Breaks and Forecasting During Breaks’, Chapter 11 in Clements, M.P. and Hendry, D.F. (eds.), Oxford Handbook of Economic Forecasting. Oxford: Oxford University Press.
Castle, J.L., Clements, M.P. and Hendry, D.F. (2015), ‘Robust Approaches to Forecasting’, International Journal of Forecasting, 31, 99–112. https://www.economics.ox.ac.uk/department-of-economics-discussion-paper-series/robust-approaches-to-forecasting
Forni, M., Hallin, M., Lippi, M. and Reichlin, L. (2000), ‘The Generalized Factor Model: Identification and Estimation’, The Review of Economics and Statistics, 82, 540–554.
Stock, James H. and Watson, Mark W. (2002), ‘Macroeconomic Forecasting using Diffusion Indices’, Journal of Business and Economic Statistics, 20, 147–162.
Castle, J.L., Clements, M.P. and Hendry, D.F. (2013), ‘Forecasting by Factors, by Variables, by Both, or Neither’, Journal of Econometrics, 177(2), 305–319. https://www.economics.ox.ac.uk/department-of-economics-discussion-paper-series/forecasting-by-factors-by-variables-or-both
Castle, J.L., Hendry, D.F. and Martinez, Andrew B. (2017), ‘Evaluating Forecasts, Narratives and Policy using a Test of Invariance’, Econometrics, 5(39). DOI:10.3390/econometrics5030039. http://www.mdpi.com/2225-1146/5/3/39
Doornik, Jurgen A. (2009), ‘Autometrics’, 88–121 in Castle, J.L. and Shephard, N. (eds.), The Methodology and Practice of Econometrics. Oxford: Oxford University Press.
Johansen, Søren and Nielsen, Bent (2009), ‘An analysis of the indicator saturation estimator as a robust regression estimator’, 1–36 in Castle and Shephard, op. cit.
Castle, J.L., Doornik, J.A. and Hendry, D.F. (2012), ‘Model Selection when there are Multiple Breaks’, Journal of Econometrics, 169, 239–246. https://www.economics.ox.ac.uk/department-of-economics-discussion-paper-series/model-selection-when-there-are-multiple-breaks
Hendry, D.F. and Doornik, J.A. (2014), Empirical Model Discovery and Theory Evaluation. Cambridge, Mass.: MIT Press.
Hendry, D.F. (2017), ‘Deciding between Alternative Approaches in Macroeconomics’, International Journal of Forecasting, 34, 119–135, with ‘Response to the Discussants’, 142–146.
Doornik, J.A. (2013), An Introduction to OxMetrics 7. London: Timberlake Consultants Press.
Doornik, J.A. and Hendry, D.F. (2013), Empirical Econometric Modelling Using PcGive: Volume I. London: Timberlake Consultants Press.
Sucarrat, Genaro, Pretis, Felix and Reade, J. James (2017), gets: General-to-Specific (GETS) Modelling and Indicator Saturation Methods, R package version 0.12. https://rdrr.io/cran/gets/
Pretis, F., Reade, J.J. and Sucarrat, G. (2016), ‘General-to-Specific (GETS) Modelling and Indicator Saturation with the R Package gets (version 0.7)’, forthcoming in the Journal of Statistical Software. https://www.economics.ox.ac.uk/department-of-economics-discussion-paper-series/general-to-specific-gets-modelling-and-indicator-saturation-with-the-r-package-gets
Knowledge of basic econometric concepts and regression analysis at the undergraduate level is the minimum requirement, as covered in David F. Hendry and Bent Nielsen (2007), Econometric Modelling: A Likelihood Approach, Princeton University Press, and David F. Hendry (2015), Introductory Macro-econometrics: A New Approach, http://www.timberlake.co.uk/macroeconometrics.html. Previous working experience with econometrics would be advantageous. Knowledge of the econometric software package OxMetrics is not required. Participants should install OxMetrics, R and the gets package before the course. Licences will be provided for OxMetrics; once R has been installed, typing the command install.packages("gets") will install the gets package (RStudio may also be downloaded as a somewhat more user-friendly interface for R).