The analysis of a screening experiment often focuses more on model selection than on estimation, so screening design criteria should be based on the desired model selection procedure. One such approach is penalized estimation, such as the LASSO and the Dantzig selector, which shrink estimates toward 0. In this talk, we introduce a new design theory for the LASSO that promotes desirable model selection properties. The theory resembles local and Bayesian optimality approaches for nonlinear models, but often fixes only the signs of the model parameters, rather than their specific values. The design measures of interest involve probabilities of either successful model recovery or sign recovery, the latter being more stringent. When effect signs are correctly guessed, the theory establishes the superiority of constrained, positive Var(s)-optimal supersaturated designs over conventional supersaturated designs in terms of model selection. Otherwise, the theory establishes that heuristic orthogonality measures (e.g., E(s²) and UE(s²)) generate equivalently powerful designs. Finally, we discuss criteria based on the thresholded LASSO that describe the behavior of the LASSO solution path for a design.
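As a concrete, purely illustrative sketch of the sign-recovery measure described above, the snippet below estimates by Monte Carlo the probability that the LASSO recovers the signs of the active effects for a candidate supersaturated design. The design, number of runs and factors, effect sizes, noise level, and penalty value are all assumptions chosen for illustration, and a simple coordinate-descent solver stands in for a production LASSO implementation; it is not the method or software from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 14, 24  # supersaturated: more factors (p) than runs (n)

# A random two-level design; in practice this would be the candidate design
# whose sign-recovery probability is being evaluated.
X = rng.choice([-1.0, 1.0], size=(n, p))

# Assumed sparse truth: three active effects with known signs.
beta = np.zeros(p)
beta[:3] = [4.0, -4.0, 4.0]

def lasso_cd(X, y, alpha, n_iter=200):
    """Minimal coordinate-descent LASSO for (1/2n)||y - Xb||^2 + alpha*||b||_1.

    Assumes +/-1 design columns, so each column has squared norm n and the
    per-coordinate update reduces to soft-thresholding.
    """
    n, p = X.shape
    b = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual excluding coordinate j
            r = y - X @ b + X[:, j] * b[j]
            rho = X[:, j] @ r / n
            b[j] = np.sign(rho) * max(abs(rho) - alpha, 0.0)
    return b

def sign_recovered(X, beta, sigma=1.0, alpha=0.5):
    """One simulated experiment: did the LASSO recover sign(beta) exactly?"""
    y = X @ beta + sigma * rng.standard_normal(X.shape[0])
    b_hat = lasso_cd(X, y, alpha)
    return np.array_equal(np.sign(b_hat), np.sign(beta))

# Monte Carlo estimate of the sign-recovery probability for this design.
prob = np.mean([sign_recovered(X, beta) for _ in range(100)])
print(f"estimated sign-recovery probability: {prob:.2f}")
```

Comparing this estimated probability across candidate designs (e.g., a conventional E(s²)-optimal design versus a constrained alternative) mimics the kind of design comparison the abstract describes, with sign recovery as the more stringent criterion.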