Author_Institution :
Dept. of Electr. & Comput. Eng., Rice Univ., Houston, TX, USA
Abstract :
Summary form only given. We describe a set of prior distributions whose independent and identically distributed (iid) realizations result in compressible signals. A signal is compressible when the sorted magnitudes of its coefficients exhibit a power-law decay, so that the signal can be well-approximated by a sparse signal. By definition, a sparse signal has only a small fraction of non-zero coefficients relative to its ambient dimension. These distributions, dubbed compressible priors, analytically summarize compressible signals and shed new light on the solution of under-determined linear regression problems by building upon the literature on sparse signal recovery. Our main results, each illustrated by a short numerical sketch after the abstract, are summarized as follows: 1) Using order statistics, we show that iid realizations of the generalized Pareto, Student's t, lognormal, generalized extreme value, and log-logistic distributions can result in compressible signals, and we quantify the compressibility of these realizations as a function of the distribution parameters. 2) We point out a common misconception about the generalized Gaussian distribution (GGD): GGD realizations lose their compressibility as their dimension grows. For instance, special cases of the GGD, e.g., the Laplacian distribution, are commonly used as sparsity-promoting priors in compressive sensing and sparse Bayesian learning problems, where the number of observations is assumed to grow logarithmically with the number of unknowns. We show that signals generated from the Laplacian distribution can only be stably embedded into lower dimensions that grow in proportion to the signal dimension. Hence, we identify an inconsistency between the decoding algorithms motivated by the GGD and their sparse solutions. 3) We use compressible priors as a scaffold to build new decoding algorithms based on Bayesian inference arguments. The objective of these algorithms is to approximate the signal realization from a compressible prior, as opposed to naively producing sparse solutions. Some of these new algorithms are variants of the popular iterative reweighting schemes. We show (i) how the tuning of these algorithms explicitly depends on the parameters of the signal's compressible prior, and (ii) how to learn the parameters of the signal's compressible prior on the fly while recovering the signal. To illustrate the concepts, we study various order statistics of natural images from the Berkeley Segmentation Dataset, including pixel gradients and wavelet coefficients.
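The claims in results 1) and 2) can be probed numerically: sort the magnitudes of an iid draw and measure the relative l2 error of its best k-term approximation at several dimensions. The sketch below is an illustration under assumptions of my own (the distribution parameters and the choice k = n/100 are arbitrary), not the paper's experiment; it shows the heavy-tailed priors keeping a small relative error as n grows, while the Laplacian error stays bounded away from zero.

```python
import numpy as np

rng = np.random.default_rng(0)

def rel_best_k_error(x, k):
    """Relative l2 error of the best k-term approximation of x."""
    mags = np.sort(np.abs(x))[::-1]          # sorted magnitudes, descending
    return np.linalg.norm(mags[k:]) / np.linalg.norm(mags)

for n in (10**3, 10**5):
    k = max(1, n // 100)                     # keep the top 1% of coefficients
    pareto = rng.pareto(1.5, size=n)         # generalized Pareto (Lomax) tail
    student = rng.standard_t(2.0, size=n)    # Student's t, heavy-tailed
    laplace = rng.laplace(size=n)            # Laplacian = GGD with exponent 1
    print(f"n={n:>6}: Pareto {rel_best_k_error(pareto, k):.3f}  "
          f"Student-t {rel_best_k_error(student, k):.3f}  "
          f"Laplacian {rel_best_k_error(laplace, k):.3f}")
```

Consistent with result 2), the Laplacian column only improves if k is allowed to grow in proportion to n, whereas the heavy-tailed columns shrink with n at a fixed fraction k/n.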
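Result 3) mentions variants of iterative reweighting. A generic member of that family is iterative reweighted least squares (IRLS) for an lq-like objective, where the exponent q plays the role of the prior parameter on which the tuning depends. The following is a minimal sketch under those assumptions; the function name, the fixed smoothing eps, and the parameter values are illustrative choices of mine, not the authors' exact algorithm.

```python
import numpy as np

def irls_decode(A, y, q=0.8, eps=1e-3, iters=50):
    """IRLS sketch for the under-determined system y = A @ x.
    Smaller q mimics a heavier-tailed (more compressible) prior."""
    x = A.T @ np.linalg.solve(A @ A.T, y)        # minimum-energy start
    for _ in range(iters):
        winv = (x**2 + eps**2) ** (1 - q / 2)    # inverse weights ~ |x_i|^(2-q)
        B = winv[:, None] * A.T                  # D^{-1} A^T with D = diag(1/winv)
        x = B @ np.linalg.solve(A @ B, y)        # weighted minimum-norm solution
    return x

# Toy usage: recover a 5-sparse signal from 40 random projections.
rng = np.random.default_rng(1)
n, m = 100, 40
x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = rng.standard_normal(5)
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_hat = irls_decode(A, A @ x_true)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

Practical variants shrink eps across iterations and, in the spirit of point (ii) above, would re-estimate the prior parameter from the current iterate rather than fix q in advance.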
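The closing sentence refers to empirical order statistics of natural images. As an illustration of what such a study computes, the sketch below estimates a power-law decay exponent from the sorted pixel-gradient magnitudes of a grayscale image; the image-loading step (e.g., from the Berkeley Segmentation Dataset) is not shown, and the fit window is an arbitrary choice rather than the paper's estimator.

```python
import numpy as np

def gradient_decay_exponent(img):
    """Fit |x|_(i) ~ C * i^(-s) to sorted pixel-gradient magnitudes
    of a grayscale image and return the decay exponent s."""
    gx = np.diff(img.astype(float), axis=1).ravel()
    gy = np.diff(img.astype(float), axis=0).ravel()
    mags = np.sort(np.abs(np.concatenate([gx, gy])))[::-1]
    mags = mags[mags > 0]                          # drop exact zeros before logs
    i = np.arange(1, mags.size + 1)
    lo, hi = mags.size // 100, mags.size // 2      # fit over the bulk of the curve
    slope, _ = np.polyfit(np.log(i[lo:hi]), np.log(mags[lo:hi]), 1)
    return -slope
```

A larger exponent s indicates faster decay, i.e., a more compressible gradient field; the same diagnostic applies to wavelet coefficient magnitudes.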
Keywords :
approximation theory; encoding; inference mechanisms; regression analysis; signal processing; Bayesian inference; Laplacian distribution; compressible signals; compressive sensing; decoding algorithms; generalized Gaussian distribution; log-logistic distributions; order statistics; sparse Bayesian learning problems; sparse signal recovery; under-determined linear regression problems; wavelet coefficients; Bayesian methods; Gaussian distribution; Image coding; Inference algorithms; Iterative algorithms; Laplace equations; Linear regression; Pareto analysis; Signal analysis; Statistical distributions;