
Abstract Details

Activity Number: 410 - High-Dimensional Regression
Type: Contributed
Date/Time: Tuesday, July 31, 2018 : 2:00 PM to 3:50 PM
Sponsor: Biometrics Section
Abstract #330687
Title: Constrained Regression via Majorization-Minimization
Author(s): Jason Xu* and Kenneth Lange
Companies: UCLA and UCLA
Keywords: Majorization-minimization; Sparse covariance estimation; Sequential minimization; EM algorithms; Non-convex optimization

The majorization-minimization (MM) principle generalizes expectation-maximization (EM) algorithms to settings beyond missing data. As in EM, the idea is to transfer optimization of a difficult objective (e.g., the observed-data likelihood) to a sequence of simpler subproblems (e.g., maximizing the expected complete-data log-likelihood). We discuss MM approaches to regression problems under constraints such as sparsity and low rank, together with simple recipes for building the family of surrogate functions to be iteratively optimized. Through this lens, we revisit sparse covariance estimation and high-dimensional regression. We present strong empirical performance on several data examples, along with convergence guarantees that hold even for non-convex objectives.
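As a concrete illustration of the surrogate-building recipe described above (not drawn from the paper itself), the sketch below applies MM to a standard l1-penalized regression. Each penalty term |b_j| is majorized at the current iterate by a quadratic that touches it there, so every MM step collapses to a weighted ridge solve. The function name `mm_lasso` and all parameter choices are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def mm_lasso(X, y, lam, n_iter=200, eps=1e-8):
    """Illustrative MM sketch for the lasso objective
        f(b) = 0.5 * ||y - X b||^2 + lam * sum_j |b_j|.
    Each |b_j| is majorized at the current iterate b^k by
        |b_j| <= b_j^2 / (2 |b_j^k|) + |b_j^k| / 2,
    which touches |b_j| at b_j = b_j^k, so minimizing the surrogate
    (a weighted ridge problem) can never increase f: the MM descent
    property. Not the authors' code; a minimal sketch only."""
    XtX, Xty = X.T @ X, X.T @ y
    beta = np.linalg.lstsq(X, y, rcond=None)[0]  # least-squares start
    obj = lambda b: 0.5 * np.sum((y - X @ b) ** 2) + lam * np.sum(np.abs(b))
    history = [obj(beta)]
    for _ in range(n_iter):
        # Surrogate minimizer solves (X'X + lam * diag(1/|b^k|)) b = X'y.
        D = np.diag(1.0 / (np.abs(beta) + eps))  # eps guards division by zero
        beta = np.linalg.solve(XtX + lam * D, Xty)
        history.append(obj(beta))
    return beta, history
```

Because the surrogate lies above the objective everywhere and agrees with it at the current iterate, the objective values in `history` decrease monotonically (up to the small `eps` perturbation), which is the sense in which MM, like EM, trades one hard problem for a sequence of easy ones.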

Authors who are presenting talks have a * after their name.

Back to the full JSM 2018 program