This short course, delivered by world-renowned econometricians from the University of Oxford, Prof. Sir David F. Hendry, Dr. Jennifer Castle and Dr. Jurgen A. Doornik, takes place at Nuffield College, Oxford. It is aimed at economists and applied econometricians who work with time-series data and want to keep up to date with major recent developments in applied econometric modelling for forecasting.

Learning outcomes:
The course is applied. Lectures will combine theory and practical sessions.
The course provides an introduction to the theory and practice of econometric modelling and forecasting in a nonstationary and evolving world, where the model differs from the data generation process (DGP). It covers the modelling methodology, practice, implementation and evaluation of economic models and their forecasts, focusing on the main sources of forecast error and the consequent possible robustification of forecasts. The framework, its basic concepts and its implications will be explained for integrated-cointegrated processes intermittently subject to outliers and structural breaks. Live applications to empirical time series will demonstrate the approach.
The course concerns econometric modelling and forecasting in a non-stationary and evolving world, when the model and mechanism differ. The main model class is a vector autoregression in integrated-cointegrated variables leading to an equilibrium-correction system, but intermittently subject to outliers and structural breaks. A combined theory and data approach to modelling economic phenomena is outlined, using automated general-to-specific methods to select models. The course will outline the remarkable properties of the proposed approach, demonstrating how to handle more variables than observations, outliers or location shifts at any unknown point in the sample, and unknown functional forms.
The course will also develop an understanding of why forecasts are or are not accurate, using macroeconomic examples to motivate the analysis. A taxonomy of forecast errors will be used, focusing on practical issues that confront most forecasters: structural change at the forecast origin, mis-specification of the forecasting model over the sample period, parameters estimated (possibly inconsistently) from data that may be measured with error, forecasts commencing from incorrect initial conditions, and innovation errors cumulating over the forecast horizon. The taxonomy reveals the central role of unanticipated location shifts, which helps explain the outcomes of forecasting competitions; other potential sources of forecast failure seem less relevant. Regime-shift non-stationarity can also be removed by co-breaking (the cancellation of breaks across linear combinations of variables). Corrections to reduce forecast-error biases (intercept and forecast-error corrections) help robustify forecasts following location shifts. Forecast pooling and factor forecasts are noted, but the recommended procedure after forecast failure is to difference a congruent and encompassing empirical model selected in-sample by Autometrics. Attempts to forecast breaks, and to forecast during breaks, will be described, as will recent developments in Nowcasting and Forediction.
Lectures will be illustrated by empirical modelling exercises using PcGive, where participants undertake the computing for empirical modelling and forecasting using Autometrics, a procedure for automatic model selection embedded in PcGive/OxMetrics. An introductory session on OxMetrics, a modular system for the econometric and statistical analysis of economic, financial and marketing data, will be presented on the first day. Software for the duration of the course will be kindly provided by Timberlake Consultants.
The framework for economic modelling and forecasting, its basic concepts and main implications will be sketched. The theory of reduction underpins economic modelling: models with no losses on reduction are congruent; those that explain rival models are encompassing. The main reductions correspond to key econometric concepts (causality, exogeneity, invariance, etc.), and are the null hypotheses of model-evaluation tests, sustained by a taxonomy of evaluation information. Congruent and encompassing sub-models can, therefore, be justified, motivating the question ‘how should they be selected?’ The key problems in forecasting are also highlighted, emphasizing the distinction between determining economic relationships or testing theories and forecasting. Good models may forecast badly and bad models can forecast well, a feature that will be explored throughout the course.
In this applied session we will introduce OxMetrics (data input, transformation, graphics, modules and recording results) and PcGive, the basic modelling tool, including model formulation, selection, and evaluation. This session will also explore the forecasting tools available in the software, including graphical and statistical output. Various applications will illustrate the software.
This is a hands-on session where Autometrics will be explained. Computer automation of selection algorithms has revealed high success rates and allows operational studies of alternative strategies. Theoretical and practical developments to Autometrics are explained, and we consider its performance across different states of nature (unknown to the empirical investigator). The properties of model selection will be discussed by way of a class Monte Carlo experiment, in which each participant generates a draw of data from a DGP using PcNaive, and we compare the retention of relevant and irrelevant variables to the theory predictions, contrasting the results with the notion of ‘size’. Methods for handling more candidate variables than observations are shown, leading to empirical model discovery.
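The flavour of the class Monte Carlo experiment can be sketched in a few lines of Python. The sketch below is an illustrative simplification, not Autometrics itself (which uses a multi-path tree search with diagnostic tracking): it regresses y on one relevant and nine irrelevant regressors, retains those significant at roughly 5%, and compares the average retention rate of irrelevant variables (often called the ‘gauge’) with the nominal size, and of the relevant variable (the ‘potency’). The DGP and all parameter values are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
T, k_irrelevant, reps, crit = 100, 9, 500, 1.96  # crit ~ 5% two-sided

retained_irrelevant = 0
retained_relevant = 0
for _ in range(reps):
    # Column 0 is the relevant regressor; the other 9 are irrelevant
    X = rng.standard_normal((T, 1 + k_irrelevant))
    y = 2.0 * X[:, 0] + rng.standard_normal(T)
    # OLS with conventional standard errors
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    resid = y - X @ beta
    s2 = resid @ resid / (T - X.shape[1])
    se = np.sqrt(s2 * np.diag(XtX_inv))
    tstats = np.abs(beta / se)
    retained_relevant += tstats[0] > crit
    retained_irrelevant += (tstats[1:] > crit).sum()

gauge = retained_irrelevant / (reps * k_irrelevant)  # retention rate of irrelevant variables
potency = retained_relevant / reps                   # retention rate of the relevant variable
print(gauge, potency)
```

Across replications the gauge should be close to the 5% nominal significance level, while a strongly relevant variable is almost always retained, which is the contrast with ‘size’ explored in the session.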
Saturation techniques will be explained, including impulse-indicator saturation (IIS), which plays an invaluable role in removing outliers; its generalizations to step-indicator saturation (SIS) for detecting location shifts; multiplicative indicator saturation (MIS), which tests for structural breaks in model parameters at unknown points in time; and designer indicator saturation, in which break shapes are designed to detect regular shift patterns. The theory of saturation will be explained, where under the null of no outliers or shifts there is almost no loss of efficiency in testing for T indicators when the nominal significance level is set at α ≤ 1/T, even in dynamic models. In this hands-on session we shall apply saturation to datasets under both the null and the alternative, revealing the importance of modelling in-sample breaks and shifts for forecast performance.
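A minimal split-half sketch of IIS in Python may help fix ideas; it is an illustrative simplification, not the Autometrics implementation. Regressing y on a constant plus an impulse dummy for each observation in one half of the sample has a closed form: the constant is the mean of the other half and each impulse coefficient is that observation's deviation from it. Impulses significant at α = 1/T in each half are then combined. The data, the outlier location and all constants below are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 100
y = 5.0 + rng.standard_normal(T)
y[40] += 10.0  # a single large outlier at t = 40 (illustrative)

crit = 2.576  # two-sided normal critical value for alpha = 1/T = 0.01

def half_iis(y, half, crit):
    """Regress y on a constant plus an impulse dummy for each t in `half`.
    Closed form: the constant equals the mean of the other half, and each
    impulse coefficient is y_t minus that mean; keep indices with |t| > crit."""
    other = np.setdiff1d(np.arange(len(y)), half)
    c = y[other].mean()
    s2 = y[other].var(ddof=1)
    se = np.sqrt(s2 * (1.0 + 1.0 / len(other)))
    tstats = np.abs(y[half] - c) / se
    return half[tstats > crit]

first, second = np.arange(T // 2), np.arange(T // 2, T)
kept = np.union1d(half_iis(y, first, crit), half_iis(y, second, crit))
print(kept)  # the outlier at t = 40 should be among the retained impulses
```

Under the null, with α = 1/T, on average only about one of the T impulses is retained by chance, which is why the efficiency loss from testing T indicators is so small.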
We examine the main sources of forecast failure using artificial data in a series of examples to highlight the results. PcNaive, a software package within OxMetrics, will be introduced to generate the forecasting examples explored. A range of parameter changes in integrated-cointegrated, I(1), time series is hardly reflected in econometric models of those series: zero-mean shifts are not easily detected by conventional constancy tests. The breaks in question are changes that leave the unconditional expectations of the I(0) components unaltered, although MIS could be successful in detecting them. Nevertheless, dynamics, adjustment speeds, etc. may alter with a low chance of detection. However, shifts in long-run means are generally noticeable. We will draw implications for the choice of forecasting device.
Six aspects of the role of unpredictability in forecasting are distinguished, compounded by four additional mistakes that are likely when estimating forecasting models. Many of the famous theorems of economic forecasting do not hold in a non-stationary and evolving world, when the model and mechanism differ; rather, their converses often do. Equilibrium-correction models are shown to be a risky device from which to forecast. Potential explanations for the intermittent occurrence of forecast failure include poor models, inaccurate data, inadequate methodology, mis-calculation of uncertainty, structural change, over-parameterization, incorrect estimators, and inappropriate variables. In fact, using a simplified taxonomy of forecast errors, most of these can be shown not to explain forecast failure, and the forecast-error taxonomy shows that forecast failure depends primarily on forecast-period events, particularly location shifts.
There is a range of potential solutions to forecast failure. Six central possibilities are forecasting the break or during it, differencing or smoothed differencing, co-breaking, intercept corrections, rapid updating, and pooling. In this practical session we shall explore one method of robustifying forecasts to location shifts. Differencing lowers the polynomial degree of deterministic terms: double differencing usually leads to a mean-zero, trend-free series, as continuous acceleration is rare in economics (except perhaps during hyperinflations or major technological shifts). The impact on forecast performance is traced. A new explanation for the empirical success of second differencing is proposed. Differencing is shown to have merit in the face of location shifts and has been used in policy decisions in the UK concerning TV advertising.
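Why differencing robustifies can be shown in a few lines of Python (a self-contained sketch; the DGP, break size and horizons are illustrative assumptions, and the random-walk device stands in for the differenced forecasts studied in the session). After an unanticipated location shift at the forecast origin, an estimated autoregression keeps correcting back towards the old equilibrium mean, producing systematic forecast failure, whereas the differenced device tracks the new level within a period of the shift.

```python
import numpy as np

rng = np.random.default_rng(1)
T, H, rho = 100, 20, 0.5
# Unanticipated location shift at the forecast origin t = T
mu = np.where(np.arange(T + H) < T, 0.0, 5.0)

y = np.zeros(T + H)
for t in range(1, T + H):
    y[t] = mu[t] + rho * (y[t - 1] - mu[t]) + 0.5 * rng.standard_normal()

# In-sample AR(1) with intercept, estimated by OLS on the pre-break data
X = np.column_stack([np.ones(T - 1), y[:T - 1]])
b = np.linalg.lstsq(X, y[1:T], rcond=None)[0]

# One-step-ahead forecast errors over the break period
e_model = [y[t] - (b[0] + b[1] * y[t - 1]) for t in range(T, T + H)]  # equilibrium-correction style
e_robust = [y[t] - y[t - 1] for t in range(T, T + H)]                 # differenced (random-walk) device

rmse_model = float(np.sqrt(np.mean(np.square(e_model))))
rmse_robust = float(np.sqrt(np.mean(np.square(e_robust))))
print(rmse_model, rmse_robust)
```

The estimated model's errors stay biased for the whole horizon because it equilibrium-corrects to the pre-break mean; the robust device makes one large error at the shift and small ones thereafter.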
Forecasting will be conducted for several model variants, with and without forecast failure. The role of parameter estimation uncertainty is considered. The practical role of forecast-error corrections will be investigated, and many theoretical issues illustrated through both successful and unsuccessful forecasting, including how to cope with location shifts. Examples will include Japanese export forecasts and UK GDP forecasts. Autometrics will be used to select the forecasting models.
In this session we shall illustrate a practical modelling and forecasting exercise, using a dataset on the UK’s CO2 emissions. First, a forecast-error taxonomy is discussed for the case of unmodelled variables, forecast ‘off-line’. The taxonomy highlights the importance of shifts in the means of the unmodelled variables, which can induce forecast failure. Using the open-model taxonomy to explain forecasting results, we then undertake a practical application. A single-equation analysis is explored first, in which a number of steps are taken to produce conditional forecasts.
The VAR is then constructed to obtain unconditional system forecasts, and the importance of step-indicator saturation is observed.
We consider the M4 competition and its implications, then turn to approaches to better estimate the forecast-origin values, called nowcasting. Finally, we consider how to evaluate policy based on forecasting, where a narrative describing the forecasts (a forediction) is used to justify a policy action.
Knowledge of basic econometric concepts and regression analysis at the undergraduate level is the minimum requirement, as covered in David F. Hendry and Bent Nielsen (2007), Econometric Modelling: A Likelihood Approach, Princeton University Press, and David F. Hendry (2015), Introductory Macro-econometrics: A New Approach.
Previous work experience with econometrics would be advantageous. Prior knowledge of the econometric software package OxMetrics is not required. Licences for OxMetrics will be provided; participants should install the software before the course.
The number of delegates is restricted. Please register early to guarantee your place.