Abstract:
Compressed sensing (CS) decoding algorithms can efficiently recover an $N$-dimensional real-valued vector $x$ to within a factor of its best $k$-term approximation by taking $m = O(k \log(N/k))$ measurements $y = \Phi x$. If the sparsity or approximate sparsity level of $x$ were known, then this theoretical guarantee would imply quality assurance of the resulting CS estimate. However, because the underlying sparsity of the signal $x$ is unknown, the quality of a CS estimate $\hat{x}$ using $m$ measurements is not assured. It is nevertheless shown in this paper that sharp bounds on the error $\|x - \hat{x}\|_{\ell_2^N}$ can be achieved with almost no effort. More precisely, suppose that a maximum number of measurements $m$ is preimposed. One can reserve $10 \log p$ of these $m$ measurements and compute a sequence of possible estimates $(\hat{x}_j)_{j=1}^{p}$ to $x$ from the $m - 10 \log p$ remaining measurements; the errors $\|x - \hat{x}_j\|_{\ell_2^N}$ for $j = 1, \ldots, p$ can then be bounded with high probability. As a consequence, numerical upper and lower bounds on the error between $x$ and the best $k$-term approximation to $x$ can be estimated for $p$ values of $k$ with almost no cost. This observation has applications outside CS as well.
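The reserve-and-validate scheme above can be illustrated numerically. The following is a minimal sketch, not the paper's implementation: it assumes Gaussian measurement rows and uses orthogonal matching pursuit (OMP) as the decoder (the paper's argument applies to any decoder), and all variable names and parameter values are illustrative. The held-out residual $\|y_{cv} - \Phi_{cv}\hat{x}_j\|_2$ acts as a Johnson–Lindenstrauss-type proxy for the unknown error $\|x - \hat{x}_j\|_2$.

```python
import numpy as np

rng = np.random.default_rng(0)

N, m, k = 256, 80, 5   # ambient dimension, total measurements, true sparsity
m_cv = 20              # measurements reserved for cross validation (~ 10 log p)
m_rec = m - m_cv       # measurements actually used for recovery

# k-sparse test signal
x = np.zeros(N)
x[rng.choice(N, size=k, replace=False)] = rng.standard_normal(k)

# Gaussian measurement rows, split into a recovery block and a held-out block
A = rng.standard_normal((m_rec, N)) / np.sqrt(m_rec)
A_cv = rng.standard_normal((m_cv, N)) / np.sqrt(m_cv)  # normalized so the
y, y_cv = A @ x, A_cv @ x                              # residual ~ true error

def omp(A, y, k):
    """Orthogonal matching pursuit: greedy recovery of a k-sparse estimate."""
    S, r = [], y.copy()
    for _ in range(k):
        S.append(int(np.argmax(np.abs(A.T @ r))))        # most correlated atom
        coef, *_ = np.linalg.lstsq(A[:, S], y, rcond=None)
        r = y - A[:, S] @ coef                           # update residual
    x_hat = np.zeros(A.shape[1])
    x_hat[S] = coef
    return x_hat

# Candidate estimates \hat{x}_j for several assumed sparsity levels; the
# held-out residual estimates each error without access to x itself.
cv_errors, true_errors = {}, {}
for kj in (2, 5, 8):
    x_hat = omp(A, y, kj)
    cv_errors[kj] = np.linalg.norm(y_cv - A_cv @ x_hat)
    true_errors[kj] = np.linalg.norm(x - x_hat)
    print(f"k={kj}: cv estimate {cv_errors[kj]:.3e}, true {true_errors[kj]:.3e}")
```

Running the sketch, the cross-validation estimate tracks the true error across the candidate sparsity levels: it is large when the assumed sparsity is too small and drops sharply once the true support is captured, all while using only the 20 held-out measurements.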
Keywords:
compressed sensing (CS); best $k$-term approximation; cross validation; Johnson–Lindenstrauss (JL) lemma; encoding/decoding; error estimates; measurements; quality assurance; approximation theory; signal processing algorithms