Abstract:
|
The objective of inverse prediction is to infer the value of a condition x_* that caused an observed response y_*, based on a linear model, fit to training data, that relates responses to conditions. Four methods of inverse prediction are investigated here, and their performances are compared in terms of the rates at which they reject potential values x_0 of the true condition x_*, that is, in terms of the power of the associated tests. The four methods are (1) inverse regression (IR), which forms a point estimate of x_* from y_* and uses a delta-method approximation to its variance to obtain an interval estimate; (2) reverse regression (RR), in which x is modeled by ordinary least squares as a function of y to obtain a prediction-interval estimate of x_* at y_*; (3) inverse prediction (IP), which produces a confidence set for x_* consisting of the values x_0 for which y_* is not rejected as an outlier; and (4) inverse prediction extended to models in which the variance of the response increases with the mean (IM). We compare the power functions of the four methods using simulated data from simple linear regression models with constant and with non-constant variance.
|
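
For context, the following is a brief sketch, not taken from the abstract above, of how the IR and IP quantities are commonly constructed under a simple linear regression model y = beta_0 + beta_1 x + epsilon with constant variance sigma^2; the notation (n, \bar{x}, S_{xx}, \hat{\sigma}^2) is standard calibration notation introduced here for illustration only, and the paper's exact formulations may differ in detail.

\[
\hat{x}_* = \frac{y_* - \hat{\beta}_0}{\hat{\beta}_1},
\qquad
\widehat{\operatorname{Var}}(\hat{x}_*) \approx
\frac{\hat{\sigma}^2}{\hat{\beta}_1^{2}}
\left[\, 1 + \frac{1}{n} + \frac{(\hat{x}_* - \bar{x})^2}{S_{xx}} \,\right]
\quad \text{(IR, delta method)},
\]
\[
C_{1-\alpha}(y_*) =
\left\{ x_0 :
\frac{\lvert\, y_* - \hat{\beta}_0 - \hat{\beta}_1 x_0 \,\rvert}
{\hat{\sigma}\sqrt{1 + \frac{1}{n} + \frac{(x_0 - \bar{x})^2}{S_{xx}}}}
\le t_{n-2,\,1-\alpha/2} \right\}
\quad \text{(IP)},
\]

where n is the number of training observations, \bar{x} and S_{xx} are the mean and corrected sum of squares of the training conditions, and \hat{\sigma}^2 is the residual mean square from the fitted line.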