
Abstract Details

Activity Number: 411 - Developments in the Construction of Experimental Designs
Type: Contributed
Date/Time: Tuesday, August 1, 2017, 2:00 PM to 3:50 PM
Sponsor: Section on Physical and Engineering Sciences
Abstract #324434
Title: Designing for What's Important: A Comparison of Bayesian and General Weighted Optimality Criteria
Author(s): Jonathan W. Stallings*
Companies: North Carolina State University
Keywords: Weighted optimality ; Bayesian optimality ; Factorial experiments ; Optimal design
Abstract:

How one compares experimental designs depends heavily on the analysis goals: minimizing estimation variance, estimation bias, prediction variance, and so on. Even when the focus is on minimizing estimation variance, there remains the question of what must be estimated and whether some effects are more important to estimate than others. DuMouchel and Jones (1994) addressed this issue for factorial experiments through a Bayesian framework that expresses relative importance through prior variances; the Bayesian optimal design minimizes a summary of posterior variances and hence favors efficient estimation of effects with large prior variance. Stallings and Morgan (2015) introduced a class of general weighted optimality criteria that likewise lets the experimenter assign relative importance to model effects, in this case through weighted variances. In this talk I compare the relative advantages and disadvantages of the two approaches in terms of their interpretability and their utility for tailoring experiments to a researcher's goals. Optimal blocked and unblocked factorial experiments under the two criteria are used to demonstrate when the optimal designs coincide and when they diverge.
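To make the contrast between the two kinds of criteria concrete, the following is a minimal Python/NumPy sketch (not part of the abstract) of how each might score a single candidate design. It uses a Bayesian log-D value in the spirit of DuMouchel and Jones (1994), with a finite prior variance placed only on a "potential" interaction term, and a weighted A-type value in the spirit of Stallings and Morgan (2015), with variances weighted by relative importance. The toy design matrix, the prior variance ratio tau2, and the weight vector w are hypothetical choices for illustration only and do not come from either paper.

import numpy as np

# Model matrix for a 2x2 factorial: columns = intercept, A, B, AB (hypothetical toy design)
X = np.array([
    [1, -1, -1,  1],
    [1,  1, -1, -1],
    [1, -1,  1, -1],
    [1,  1,  1,  1],
], dtype=float)

# Bayesian-style criterion: the interaction AB is a "potential" term with finite
# prior variance tau2; the primary terms get a diffuse (zero-precision) prior.
tau2 = 1.0                                   # hypothetical prior variance ratio
K = np.diag([0.0, 0.0, 0.0, 1.0 / tau2])     # prior precision, nonzero only for AB
bayes_d = np.linalg.slogdet(X.T @ X + K)[1]  # log |X'X + K|; larger is better

# Weighted-style criterion: weights express the relative importance of each effect,
# and the criterion sums the weighted variances (a weighted A-type value).
w = np.array([1.0, 4.0, 4.0, 1.0])           # hypothetical weights: main effects matter most
C_inv = np.linalg.inv(X.T @ X)               # variances are proportional to diag(C_inv)
weighted_a = np.sum(w * np.diag(C_inv))      # smaller is better

print(f"Bayesian log-D value: {bayes_d:.3f}")
print(f"Weighted A value:     {weighted_a:.3f}")

Scoring several candidate designs this way and ranking them by each value shows how the prior variances and the weights steer the choice of design toward the effects deemed most important.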


Authors who are presenting talks have a * after their name.
