Abstract:
We study the theoretical properties of learning a dictionary from a set of N signals x_i ∈ R^K for i = 1,...,N via l_1-minimization. We assume that the signals are generated as i.i.d. random linear combinations of the K atoms of a complete reference dictionary D_0 ∈ R^{K×K}. We consider two generative models for the random linear coefficients: the s-sparse Gaussian model and the Bernoulli-Gaussian model. First, for the population case and under each of the two generative models, we establish a sufficient and almost necessary condition for the reference dictionary D_0 to be a local minimum of the l_1-norm objective function. Our condition covers both the sparse and dense cases of signal generation and significantly improves the sufficient condition of Gribonval and Schnass (2010). We also provide sharp and easy-to-compute lower and upper bounds for the quantities involved in our conditions. With these bounds, we show that local identifiability is possible with sparsity level up to the order O(μ^{-2}) for a complete μ-coherent reference dictionary. Our results also translate to the finite sample case with high probability, provided that the number of signals scales as O(K log K).
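The setup above can be illustrated numerically. The sketch below (assuming NumPy; all variable names and parameter values are illustrative, not from the paper) draws signals from the s-sparse Gaussian model with a complete dictionary D_0 and evaluates the l_1 criterion, which for an invertible K×K dictionary D reduces to the average l_1-norm of the coefficients D^{-1} x_i:

```python
import numpy as np

rng = np.random.default_rng(0)
K, N, s = 8, 200, 2  # ambient dimension, number of signals, sparsity level (illustrative)

# Complete reference dictionary D_0: here an orthonormal K x K matrix for simplicity
D0, _ = np.linalg.qr(rng.standard_normal((K, K)))

# s-sparse Gaussian model: each coefficient vector has s i.i.d. Gaussian entries
# on a uniformly random support of size s
A = np.zeros((K, N))
for i in range(N):
    support = rng.choice(K, size=s, replace=False)
    A[support, i] = rng.standard_normal(s)

X = D0 @ A  # signals x_i, each a random linear combination of the K atoms

def l1_objective(D, X):
    """Average l_1-norm of the coefficients D^{-1} x_i (complete, invertible D)."""
    return np.abs(np.linalg.solve(D, X)).sum() / X.shape[1]

# At the reference dictionary, the objective equals the average l_1-norm
# of the true sparse coefficients
print(l1_objective(D0, X))
```

Local identifiability of D_0 then asks whether small perturbations of D_0 (within the feasible set of dictionaries) can only increase this objective, which is what the sufficient-and-almost-necessary condition in the abstract characterizes.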
Copyright © American Statistical Association.