Abstract:
|
The group fused lasso (GFL) is a powerful technique for detecting change points and segmenting multivariate signals. In particular, its penalized-estimation framework lets users flexibly explore data representations with varying numbers of change points and levels of sparsity. Mathematically, the GFL uses non-differentiable penalty functions to induce sparsity and constancy in the model parameters. Fitting the model therefore amounts to a nonsmooth convex optimization problem, typically solved with proximal algorithms. Although these algorithms are known to converge under very mild regularity conditions, their computational cost can become burdensome in high-dimensional settings. In this paper we develop a new GFL method that combines proximal steps with blockwise descent/fusion cycles to achieve faster computation. We compare the numerical accuracy and computational speed of the proposed method with those of state-of-the-art GFL algorithms. We also present an application to the study of dynamic brain connectivity in resting-state fMRI data.