The majorization-minimization (MM) principle generalizes expectation-maximization (EM) algorithms to settings beyond missing data. Like EM, the idea relies on transferring optimization of a difficult objective (e.g., the observed-data likelihood under missingness) to a sequence of simpler subproblems (e.g., maximizing the expected complete-data log-likelihood). We discuss MM approaches to regression problems under constraints such as sparsity and low rank, along with simple recipes for constructing the family of surrogate functions to be iteratively optimized. Through this lens, we revisit sparse covariance estimation and high-dimensional regression. We demonstrate strong empirical performance on several data examples and establish convergence guarantees even for non-convex objectives.
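To make the transfer to simpler subproblems concrete, the following is a minimal illustrative sketch (not the paper's implementation) of MM for sparse regression: the lasso penalty $\lambda\lVert\beta\rVert_1$ is majorized term-by-term by the quadratic $\lambda\bigl(\beta_j^2/(2|\beta_j^{(k)}|) + |\beta_j^{(k)}|/2\bigr)$, which touches $\lambda|\beta_j|$ at the current iterate, so each MM step reduces to a weighted ridge problem with a closed-form solution. The function name and the small stabilizing constant `eps` are our choices for illustration.

```python
import numpy as np

def mm_lasso(X, y, lam, n_iter=200, eps=1e-8):
    """MM iterations for 0.5*||y - X b||^2 + lam*||b||_1.

    Each |b_j| is majorized at the current iterate b^k by
    b_j^2 / (2|b_j^k|) + |b_j^k| / 2, so minimizing the surrogate
    is a weighted ridge regression solved in closed form. By the
    MM descent property, the original objective never increases.
    """
    n, p = X.shape
    XtX, Xty = X.T @ X, X.T @ y
    # Ridge warm start for the first surrogate.
    b = np.linalg.solve(XtX + lam * np.eye(p), Xty)
    for _ in range(n_iter):
        # Surrogate weights; eps guards against division by zero
        # as coefficients shrink toward exact sparsity.
        w = lam / np.maximum(np.abs(b), eps)
        b = np.linalg.solve(XtX + np.diag(w), Xty)
    return b
```

Each iteration costs one $p \times p$ solve, and the descent property holds because the surrogate dominates the objective everywhere while matching it at the current iterate.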