Abstract:
|
We consider a minimax, decision-theoretic approach to experimental design in which it is explicitly acknowledged that a parametric regression model is only an approximation, differing from the true mean response by the addition of a nonparametric discrepancy function. When the class of possible discrepancies is defined by a bound on the L2-norm, minimax prediction error fails as a selection criterion for finite deterministic designs, because every deterministic design has infinite maximum risk. In other design problems, however, it has been known since at least the 1980s that it is often minimax optimal to generate design realizations probabilistically, according to a randomized strategy. The most familiar example is the simple comparative experiment, in which a fixed, deterministic set of treatments is most commonly allocated to experimental units at random. We present a novel class of robust random designs for regression in which the treatment set itself is also generated probabilistically. For these random designs the maximum risk is finite, and is sufficiently tractable to permit the selection of minimax efficient strategies via numerical optimization algorithms.
|