Consider the following competitive scenario. Modeller One (M1) produces a univariate forecast distribution for a target future observation using a simple model that depends only on the historical trajectory of the univariate target series. Modeller Two (M2) produces a multivariate forecast distribution in which one of the variables is the same target variable. Who produces the better forecasts? How are we to assess the forecast distributions produced by the two competitors?
This paper first discusses situations in which this competitive scenario arises in practice, and why naïve comparisons may be problematic. We offer both theoretical and practical suggestions for the construction of appropriate scoring rules to compare the performance of the resulting forecast distributions. Finally, we use the insights gained to combine the competing forecasts into a single target forecast distribution, in case M1 and M2 decide that they want to collaborate.
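To make the comparison concrete, the following sketch scores M1's univariate forecast against the target marginal of M2's multivariate forecast using the logarithmic score, a standard strictly proper scoring rule. All distributions, parameters, and the use of the log score here are illustrative assumptions, not the paper's proposed methodology.

```python
# Illustrative sketch only: every distribution and parameter below is
# hypothetical, chosen to show the mechanics of scoring the two forecasts.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Assumed true data-generating process for the target variable.
y_obs = rng.normal(loc=1.0, scale=1.0, size=200)

# M1: univariate Gaussian forecast distribution for the target.
m1 = stats.norm(loc=0.9, scale=1.1)

# M2: bivariate Gaussian forecast; the first coordinate is the target.
mean2 = np.array([1.05, 0.0])
cov2 = np.array([[1.0, 0.3],
                 [0.3, 2.0]])
# The target marginal of a multivariate Gaussian is itself Gaussian,
# with the corresponding mean and variance.
m2_marginal = stats.norm(loc=mean2[0], scale=np.sqrt(cov2[0, 0]))

# Average logarithmic score over the observations (higher is better).
score_m1 = m1.logpdf(y_obs).mean()
score_m2 = m2_marginal.logpdf(y_obs).mean()
print(f"M1 mean log score: {score_m1:.3f}")
print(f"M2 mean log score: {score_m2:.3f}")
```

Note that scoring only the target marginal discards M2's dependence information, which is one reason a naïve comparison of a univariate against a multivariate forecast can be problematic.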