Author_Institution :
Project METISS, IRISA, Rennes, France
Abstract :
This paper treats the problem of learning a dictionary providing sparse representations for a given signal class, via ℓ1-minimization. The problem can also be seen as factorizing a d × N matrix Y = (y1 . . . yN), yn ∈ ℝd, of training signals into a d × K dictionary matrix Φ and a sparse K × N coefficient matrix X = (x1 . . . xN), xn ∈ ℝK. The precise question studied here is when a dictionary coefficient pair (Φ, X) can be recovered as a local minimum of a (nonconvex) ℓ1-criterion with input Y = Φ X. First, for general dictionaries and coefficient matrices, algebraic conditions ensuring local identifiability are derived, which are then specialized to the case when the dictionary is a basis. Finally, assuming a random Bernoulli-Gaussian sparse model on the coefficient matrix, it is shown that sufficiently incoherent bases are locally identifiable with high probability. The perhaps surprising result is that the typically sufficient number of training samples N grows, up to a logarithmic factor, only linearly with the signal dimension, i.e., N ≈ CK log K, in contrast to previous approaches requiring combinatorially many samples.
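For concreteness, the ℓ1-criterion referred to above can be sketched as the following constrained program; the unit-ℓ2-norm constraint on the dictionary columns φk is the customary normalization for this kind of factorization and is stated here as an assumption rather than quoted from the abstract:

\min_{\Phi,\, X} \; \|X\|_1 := \sum_{k=1}^{K} \sum_{n=1}^{N} |x_{kn}|
\quad \text{subject to} \quad \Phi X = Y, \qquad \|\varphi_k\|_2 = 1, \; 1 \le k \le K.

In this notation, the question studied in the paper is when a pair (Φ, X) with Y = Φ X is a local minimum of this program, i.e., when it is locally identifiable.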
Keywords :
dictionaries; matrix decomposition; probability; signal processing; sparse matrices; Bernoulli-Gaussian sparse model; dictionary coefficient matrix; dictionary identification; dictionary learning; ℓ1-minimization; logarithmic factor; sparse matrix factorization; sparse representation; training samples; training signals; blind source localization; blind source separation; compressed sensing; harmonic analysis; independent component analysis; noise reduction; nonconvex optimization; random matrices; signal sampling; source separation