Abstract:
|
Traditionally, frequentist alphabetic optimal designs are found by minimising a function of the inverse Fisher information matrix. For example, the popular D-optimal design is found by minimising the log determinant of the inverse Fisher information matrix. Alternatively, a decision-theoretic design is found by minimising the risk function, defined as the expectation of a suitably chosen loss function. Typically, the risk function is analytically intractable. However, under a first-order asymptotic approximation (with respect to the sample size of the experiment) to the risk function, many popular alphabetic optimal designs can be seen as approximations to decision-theoretic designs under certain loss functions. Designed experiments often have small sample sizes, meaning that an alphabetic optimal design can be significantly sub-optimal compared to the corresponding decision-theoretic design. However, generating decision-theoretic designs is non-trivial due to the intractability of the risk function. Computational algorithms are proposed to generate decision-theoretic designs. Designs are then found for a range of models and, where appropriate, compared to alphabetic optimal designs.
|