This series of three talks takes us on a journey that starts with the introduction of the lasso in the 1990s, and brings us up to date on some of the vast array of applications that have emerged since.
I: We motivate the need for sparsity with wide data, and then chronicle the invention of the lasso and the quest for good software. Several examples are given, culminating in lasso models for polygenic traits using GWAS data. We end with a survey of some active areas of research not covered in the remaining two talks.
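To give a flavor of the sparse fits in Talk I, the following is a minimal sketch of a lasso on wide synthetic data. It uses scikit-learn's LassoCV in place of the software discussed in the talk, and the data, dimensions, and penalty search are illustrative assumptions rather than material from the talk.

    # Minimal lasso sketch on wide data (p >> n); all choices are illustrative.
    import numpy as np
    from sklearn.linear_model import LassoCV

    rng = np.random.default_rng(0)
    n, p, k = 100, 1000, 10            # far more features than samples
    X = rng.standard_normal((n, p))
    beta = np.zeros(p)
    beta[:k] = rng.standard_normal(k)  # only k features carry signal
    y = X @ beta + 0.5 * rng.standard_normal(n)

    # Cross-validation chooses the penalty strength; the lasso then
    # sets most coefficients exactly to zero.
    fit = LassoCV(cv=5).fit(X, y)
    print("chosen penalty:", fit.alpha_)
    print("nonzero coefficients:", np.sum(fit.coef_ != 0))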
II: Matrix completion re-emerged during the Netflix competition as a way to compute a low-rank SVD in the presence of missing data, and to impute the missing values. We discuss some algorithms for this problem and aspects of its behavior, and illustrate its application in recommender systems and in modeling sparse longitudinal multivariate data.
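One well-known algorithm in this family alternates an SVD with soft-thresholding of the singular values (the Soft-Impute idea). The sketch below is a bare-bones NumPy version under illustrative choices of penalty and data; it is not presented as the talk's actual algorithm or code.

    # Bare-bones soft-impute sketch: iteratively fill missing entries with
    # values from a soft-thresholded SVD. Penalty lam and data are illustrative.
    import numpy as np

    def soft_impute(X, lam=1.0, n_iters=100):
        """X: array with np.nan marking missing entries."""
        mask = ~np.isnan(X)          # True where observed
        Z = np.where(mask, X, 0.0)   # start with missing entries set to 0
        for _ in range(n_iters):
            U, s, Vt = np.linalg.svd(Z, full_matrices=False)
            s = np.maximum(s - lam, 0.0)     # soft-threshold singular values
            Z_low = (U * s) @ Vt             # low-rank reconstruction
            Z = np.where(mask, X, Z_low)     # keep observed entries fixed
        return Z

    # Tiny demo: a rank-1 matrix with ~30% of its entries removed.
    rng = np.random.default_rng(0)
    A = np.outer(rng.standard_normal(8), rng.standard_normal(6))
    A_missing = A.copy()
    A_missing[rng.random(A.shape) < 0.3] = np.nan
    print(soft_impute(A_missing, lam=0.1))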
III: The graphical lasso estimates sparse inverse covariance matrices that capture the conditional independencies in multivariate Gaussian data. We discuss this approach and its extensions, and then illustrate its use for anomaly detection and imputation. We also discuss the group lasso, with applications to detecting interactions and to additive model selection.
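As a small illustration of the idea in Talk III, the sketch below fits scikit-learn's GraphicalLasso to data drawn from an assumed chain-structured Gaussian model; zeros in the estimated precision matrix correspond to missing edges, i.e., conditional independencies. The chain structure, sample size, and penalty are illustrative assumptions.

    # Minimal graphical-lasso sketch; the chain-structured truth and the
    # penalty alpha are illustrative, not from the talk.
    import numpy as np
    from sklearn.covariance import GraphicalLasso

    rng = np.random.default_rng(0)
    p = 5
    # Sparse precision matrix: each variable depends only on its chain neighbors.
    Theta = np.eye(p)
    for i in range(p - 1):
        Theta[i, i + 1] = Theta[i + 1, i] = 0.4
    Sigma = np.linalg.inv(Theta)

    X = rng.multivariate_normal(np.zeros(p), Sigma, size=500)
    model = GraphicalLasso(alpha=0.05).fit(X)
    # Off-chain entries of the estimated precision matrix shrink toward zero,
    # recovering the conditional-independence structure.
    print(np.round(model.precision_, 2))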