Abstract:
|
How one compares experimental designs depends heavily on the analysis goals (minimizing estimation variance, estimation bias, prediction variance, etc.). Even if one decides to focus on minimizing estimation variance, there remains the question of which effects must be estimated and whether some are more important to estimate than others. DuMouchel and Jones (1994) tackled this issue for factorial experiments via a Bayesian framework that assigns relative importance through prior variances. The resulting Bayesian optimal design minimizes a summary of posterior variances and hence focuses on efficient estimation of effects with large prior variance. Stallings and Morgan (2015) introduced a class of general weighted optimality criteria that also allows the experimenter to assign relative importance to model effects, in this case through weighted variances. In this talk I will compare the relative advantages and disadvantages of the two approaches in terms of their interpretability and their utility for tailoring experiments to a researcher's goals. Cases of optimal blocked and unblocked factorial experiments under the two criteria are used to demonstrate when the optimal designs coincide and when they diverge.
|