
Abstract Details

Activity Number: 22 - Testing and Evaluation of High-Dimensional Models
Type: Topic Contributed
Date/Time: Sunday, July 28, 2019, 2:00 PM to 3:50 PM
Sponsor: Section on Bayesian Statistical Science
Abstract #304664
Title: Comparing and Combining Forecast Distributions Having Different Dimensions
Author(s): Catherine Forbes*
Companies: Monash University
Keywords: forecast evaluation; score function; prediction distributions; marginal likelihood; model selection; model averaging
Abstract:

Consider the following competitive scenario. Modeller One (M1) produces a univariate forecast distribution for a target future observation using a simple model depending only on the historical trajectory of the univariate target series. Modeller Two (M2) produces a multivariate forecast distribution, where one of the variables is the same target variable. Who produces the better forecasts? How are we to assess the forecast distributions produced by the competitors?

This paper first discusses situations where the competitive scenario reasonably occurs, and why naïve comparisons may be problematic. We offer both theoretical and practical suggestions for the construction of appropriate scoring rules to compare the performance of the resulting forecast distributions. Finally, we use the insights gained to combine the competing forecasts into a single target forecast distribution, should M1 and M2 decide that they want to collaborate.
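The abstract does not specify which scoring rules are constructed; as an illustration only, the sketch below evaluates both forecasts with the familiar log score on the shared target variable, marginalizing M2's multivariate (here Gaussian) forecast down to the target coordinate, and then combines the two with an equal-weight linear opinion pool. All distributions, parameter values, and the choice of pooling scheme are hypothetical and not taken from the paper.

```python
import math

def normal_logpdf(y, mu, sigma):
    """Log density of N(mu, sigma^2) evaluated at y."""
    return -0.5 * math.log(2 * math.pi * sigma**2) - (y - mu)**2 / (2 * sigma**2)

# Hypothetical realized target value (illustrative number only).
y_obs = 1.2

# M1: univariate forecast distribution, say N(1.0, 1.0).
score_m1 = normal_logpdf(y_obs, 1.0, 1.0)

# M2: bivariate Gaussian forecast in which the target is the first
# coordinate, with marginal mean 0.8 and marginal variance 1.0.
# To score on common ground, marginalize to the target variable --
# for a Gaussian, that marginal is just the corresponding univariate normal.
score_m2 = normal_logpdf(y_obs, 0.8, 1.0)

# A higher log score means a better forecast for this realization.
print(score_m1, score_m2)

# One simple (assumed, not the paper's) way to combine the two forecasts:
# an equal-weight linear opinion pool of the marginal densities.
def pooled_logpdf(y, w=0.5):
    return math.log(w * math.exp(normal_logpdf(y, 1.0, 1.0))
                    + (1 - w) * math.exp(normal_logpdf(y, 0.8, 1.0)))

print(pooled_logpdf(y_obs))
```

Because both forecasts are reduced to the same univariate target before scoring, the comparison is dimension-free; the pooled density always lies between the two component densities at any evaluation point.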


Authors who are presenting talks have a * after their name.

Back to the full JSM 2019 program