
Abstract Details

Activity Number: 573 - Time Series: Testing and Forecasting
Type: Topic Contributed
Date/Time: Thursday, August 6, 2020, 3:00 PM to 4:50 PM EDT
Sponsor: Business and Economic Statistics Section
Abstract #313595
Title: Evaluating Government Budget Forecasts
Author(s): Neil Ericsson* and Andrew Martinez
Companies: Federal Reserve Board and U.S. Department of the Treasury
Keywords: big data; evaluation; forecasts; machine learning; MSFE; saturation
Abstract:

This paper reviews the literature on the evaluation of government budget forecasts, outlines a generic framework for forecast evaluation, and illustrates that framework with empirical analyses of different U.S. government agencies' forecasts of U.S. federal debt. Techniques for forecast evaluation include comparison of mean squared forecast errors, forecast encompassing, tests of predictive failure, and tests of bias and efficiency. Recent extensions of these techniques use machine-learning algorithms to handle more potential regressors than observations, a characteristic common to big data. These techniques apply broadly: to forecasts of components of the government budget, to budget forecasts from municipal, state, provincial, and national governments, and to other economic and non-economic forecasts. Evaluating forecasts is fundamental to assessing their usefulness, and it can indicate ways in which the forecasts may be improved.
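As context for the techniques named above, the sketch below illustrates two of them on synthetic data: a comparison of mean squared forecast errors and a Mincer-Zarnowitz regression testing bias and efficiency. This is not the authors' code; the series, the agency labels, and the lag choice for the robust covariance are hypothetical placeholders.

```python
# A minimal sketch (synthetic data, not actual budget forecasts) of two
# standard forecast-evaluation checks: MSFE comparison and a
# Mincer-Zarnowitz test of bias and efficiency.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Hypothetical actual outcomes and two competing forecasts of them.
actual = rng.normal(size=80).cumsum()
forecast_a = actual + rng.normal(scale=1.0, size=80)         # "agency A" forecast
forecast_b = actual + 0.3 + rng.normal(scale=1.2, size=80)   # "agency B" forecast

# 1. Mean squared forecast error comparison.
msfe_a = np.mean((actual - forecast_a) ** 2)
msfe_b = np.mean((actual - forecast_b) ** 2)
print(f"MSFE A = {msfe_a:.3f}, MSFE B = {msfe_b:.3f}")

# 2. Mincer-Zarnowitz regression: regress the actual on a constant and the
#    forecast; unbiasedness and efficiency jointly imply intercept = 0 and
#    slope = 1. HAC standard errors allow for serially correlated errors.
X = sm.add_constant(forecast_a)
mz = sm.OLS(actual, X).fit(cov_type="HAC", cov_kwds={"maxlags": 4})
print(mz.summary())

# Joint Wald test of (intercept, slope) = (0, 1).
print(mz.wald_test("const = 0, x1 = 1", use_f=True))
```

A small p-value in the joint Wald test would indicate a biased or inefficient forecast; the MSFE comparison ranks the two forecasts by average squared error but, on its own, does not test whether the difference is statistically significant.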


Authors who are presenting talks have a * after their name.
