We consider the problem of estimation and variable selection for general linear regression models. Regularized regression procedures have been widely used for variable selection, but most existing methods perform poorly in the presence of outliers. We construct a new penalized procedure that achieves sparse and robust solutions by imposing adaptive weights on both the decision loss and the penalty function. The proposed procedure satisfies the oracle properties: it attains full efficiency when the model is correct and, at the same time, achieves maximum robustness when outliers are present. For practical implementation, we present a computational algorithm, and we examine the finite-sample and robustness properties of the method through Monte Carlo studies. The proposed method is applied to a functional magnetic resonance imaging (fMRI) data set related to attention deficit hyperactivity disorder (ADHD).