Subject: Econometric modelling of climate and related variables
The course provides an introduction to the theory and practice of econometric modelling of climate variables in a non-stationary world. It covers the modelling methodology, implementation, practice and evaluation of climate-economic models.
The framework, its basic concepts and their implications will be explained for modelling evolving processes that are intermittently subject to outliers and structural breaks. Live applications to empirical climate time series will demonstrate the approach.
Who should attend: This course is aimed at anyone modelling climate time-series data who wants to get up-to-date with major recent developments in empirical econometric modelling.
Learning outcomes: Develop skills in selecting econometric models for a range of climate and related variables, producing and evaluating empirical models, and handling evolving time series that exhibit trends, outliers and sudden shifts. Exposure to the powerful econometric software package XLModeler, running within Excel, will be provided to achieve this.
Delivery style: The course is applied, combining theory with practical sessions.
Climate econometrics is a new sub-discipline that has grown rapidly over the last few years. Because greenhouse gas emissions like carbon dioxide (CO2), nitrous oxide (N2O) and methane (CH4) that are generated by human economic activity are a major cause of climate change, it is not surprising that the tool set designed to empirically investigate economic outcomes should be applicable to studying many empirical aspects of climate change.
Economic and climate data exhibit many commonalities. Both are subject to non-stationarities in the form of slowly evolving trends and sudden shifts. Consequently, the well-developed machinery for modelling economic time series can be fruitfully applied to climate data.
In both disciplines, there is imperfect and incomplete knowledge of the processes actually generating the data. As that data generating process (DGP) is not known, investigators must search for what they hope is a close approximation to it. The approach adopted at Climate Econometrics (http://www.climateeconometrics.org/) is based on a model selection methodology that has excellent properties for locating an unknown DGP nested within a large set of possible explanations, allowing for dynamic interactions and feedbacks, outliers from shocks and measurement errors, stochastic trends, sudden, often unanticipated shifts, and non-linear relationships.
Our widely used software is a variant of machine learning which undertakes multi-path block searches commencing from very general specifications to discover a well-specified and undominated model of the processes under analysis while retaining and evaluating all the available theoretical insights. To do so requires implementing indicator-saturation estimators designed to match the problems faced, such as outliers, location shifts, trend breaks, parameter changes, and more complex phenomena that have a common reaction 'shape' like the impacts of volcanic eruptions on temperature reconstructions.
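To give a flavour of how indicator saturation works, the sketch below implements the simplest split-half variant of impulse-indicator saturation (IIS) in plain Python: an impulse dummy is added for every observation in each half of the sample in turn, dummies with large t-ratios are retained, and the final model keeps the union of retained dummies. This is a deliberately minimal illustration, not the multi-path block-search algorithm used by Autometrics; the function name and the 1%-level cut-off of 2.58 are choices made here for the example.

```python
import numpy as np

def impulse_indicator_saturation(y, X, crit=2.58):
    """Split-half impulse-indicator saturation (IIS), a simplified sketch.

    For each half of the sample, add one impulse dummy per observation,
    estimate by OLS, and retain dummies whose |t-ratio| exceeds `crit`.
    The final regression includes the regressors X plus the union of
    retained dummies. Returns the coefficient estimates on X and the
    indices of observations flagged as outliers.
    """
    T = len(y)
    k = X.shape[1]
    retained = []
    for idx in (np.arange(0, T // 2), np.arange(T // 2, T)):
        # Impulse dummies for this half only: D[t, j] = 1 at observation idx[j]
        D = np.zeros((T, len(idx)))
        D[idx, np.arange(len(idx))] = 1.0
        Z = np.column_stack([X, D])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
        dof = T - Z.shape[1]          # residual variance comes from the other half
        sigma2 = resid @ resid / dof
        cov = sigma2 * np.linalg.pinv(Z.T @ Z)
        t = beta / np.sqrt(np.diag(cov))
        retained += [idx[j] for j in range(len(idx)) if abs(t[k + j]) > crit]
    # Final model: X plus the union of retained impulse dummies
    if retained:
        D = np.zeros((T, len(retained)))
        for col, t_i in enumerate(sorted(retained)):
            D[t_i, col] = 1.0
        Z = np.column_stack([X, D])
    else:
        Z = X
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return beta[:k], sorted(retained)
```

On simulated data with a single large shift in the mean at one date, the procedure recovers that date as a retained dummy while leaving the coefficient estimates on the genuine regressors essentially unchanged, which is the key property exploited in the applications below.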
As well as describing these econometric tools, we also consider the relevant climate science to provide the background to the later applications. By noting the Earth's limited available atmosphere and water resources, we establish that humanity really can alter the climate, and is doing so in myriad ways. Then we relate past climate changes to the 'great extinctions' seen in the geological record. Following the Industrial Revolution in the mid-18th century, real income levels per capita have risen dramatically, many killer diseases have been tamed, and longevity has approximately doubled. However, such beneficial developments have led to a global explosion in anthropogenic emissions of greenhouse gases. Such emissions are also subject to many relatively sudden shifts from major wars, crises, resource discoveries, technological innovations and policy interventions.
Consequently, stochastic trends, large shifts and numerous outliers must all be handled in practice to develop viable empirical models of climate phenomena. A further advantage of our econometric methods is to objectively detect the impacts of important policy interventions. The econometric approach can handle all these jointly, which is essential to accurately characterizing non-stationary observational data. Few approaches in either climate or economic modelling consider all such effects jointly, but a failure to do so leads to mis-specified models and hence incorrect theory evaluation and policy analyses.
Applications of these methods are illustrated by empirical modelling exercises. The first investigates past climate variability over the Ice Ages, where a simultaneous-equations system is developed to characterize land ice volume, temperature and atmospheric CO2 levels as non-linear functions of measures of the Earth's orbital path round the Sun. The second analyses the United Kingdom's highly non-stationary annual CO2 emissions over the last 150 years, walking through all the key modelling stages. As the first country into the Industrial Revolution, the UK is one of the first countries out, with per capita annual CO2 emissions now below their 1860 levels, when our data series begin, a reduction achieved with little aggregate cost. However, very large decreases in all greenhouse gas emissions are required to meet the UK's 2050 target, set by its Climate Change Act of 2008, of an 80% reduction from 1990 levels, since changed to a net zero target as required globally to stabilize temperatures. Other empirical studies to round out the illustrations are more briefly noted: modelling monthly changes in global CO2 concentrations to investigate the role of anthropogenic emissions; mapping energy-balance models to a cointegration representation of the observations; a global panel data analysis of the costs in GDP growth of 2°C warming over 1.5°C modelled with indicator saturation; modelling the impacts of volcanic eruptions on temperature reconstructions; and establishing the effects of forecast uncertainty on hurricane damages.
Lectures will be illustrated by empirical modelling exercises using XLModeler, a system for the econometric and statistical analysis of non-stationary data running within Excel. Participants will undertake the computing for empirical modelling using the module Autometrics, a procedure for automatic model selection embedded in XLModeler. An introductory session on XLModeler will be presented on the first day.
| Time | Session / Description |
| --- | --- |
| 09:00-09:20 | Arrival & Registration |
| 15:15-15:30 | Tea/coffee break (Feedback Session) |
Knowledge of basic econometric concepts and regression analysis at the undergraduate level is the minimum requirement, at least to the level of David F. Hendry and Bent Nielsen (2007), Econometric Modelling: A Likelihood Approach, Princeton University Press.
Some previous experience with econometrics would be advantageous. Knowledge of the XLModeler software is not required but participants should install XLModeler before the course using licences provided by Timberlake.
The number of delegates is restricted. Please register early to guarantee your place.