The starting assumption for compressed sensing (CS) is that the underlying signal is sparse in some basis: for example, an $s$-sparse signal has at most $s$ non-zero Fourier coefficients. And real-life experience does show that the signals under consideration are often sparse.
The question is: given a signal, before sending the compressively sampled bits to the receiver so that she can recover it as best she can, is there a way to tell what its sparsity is, and whether it is a suitable candidate for compressed sensing in the first place?
Alternatively, is there any additional or alternative characterization of sparsity that can tell us quickly whether CS will be useful? One can trivially see that the sender could do exactly what the receiver will do with some randomly chosen set of measurements, and then check the answer. But is there any other way to resolve this question?
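As a concrete version of what such a quick sender-side check might look like, here is a minimal numpy sketch that estimates the "effective sparsity" of a signal as the smallest number of transform coefficients capturing a given fraction of its energy. The function name, the DFT as the sparsifying basis, and the 99% energy threshold are all illustrative choices, not something from the question itself:

```python
import numpy as np

def effective_sparsity(x, energy_fraction=0.99):
    """Smallest number of DFT coefficients whose combined energy
    reaches `energy_fraction` of the signal's total energy.
    (Illustrative helper; the basis and threshold are assumptions.)"""
    energies = np.abs(np.fft.fft(x)) ** 2
    sorted_energy = np.sort(energies)[::-1]
    cumulative = np.cumsum(sorted_energy) / sorted_energy.sum()
    return int(np.searchsorted(cumulative, energy_fraction) + 1)

# A signal built from 3 sinusoids at integer frequencies is 6-sparse
# in the DFT basis (each real sinusoid occupies two conjugate bins).
n = 256
t = np.arange(n)
x = (np.sin(2 * np.pi * 5 * t / n)
     + 0.5 * np.sin(2 * np.pi * 12 * t / n)
     + 0.25 * np.sin(2 * np.pi * 40 * t / n))
print(effective_sparsity(x))  # → 6
```

Of course, this only answers the question for one fixed, known basis; the harder problem the question raises is deciding compressibility without committing to a basis in advance.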
My suspicion is that something like this must have been studied, but I couldn't find a good pointer.
Note: I had posted this question on MathOverflow a few weeks back but didn't get any answer; hence the cross-post.
Answer
Indeed, there are ways in which sparsity, or information content, may be estimated at the acquisition device. The details, practicality, and actual usefulness of doing so are debatable and heavily dependent upon the context in which it is applied. In the case of imaging, one could determine which areas of an image are more or less compressible in a predetermined basis. For example, see "Saliency-Based Compressive Sampling for Image Signals" by Yu et al. In this case, the additional complexity requirements placed on the acquisition device provide marginal gains.
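To make the per-region compressibility idea concrete, here is a small numpy sketch that scores each tile of an image by how concentrated its energy is in a few transform coefficients. This is my own illustrative construction, not the method of the cited paper: the block size, the number of kept coefficients, and the use of the DFT as a stand-in for a predetermined basis are all assumptions.

```python
import numpy as np

def block_compressibility(image, block=8, keep=6):
    """For each block x block tile, return the fraction of the tile's
    energy held by its `keep` largest transform coefficients (DFT used
    here as a stand-in basis). Scores near 1 mean highly compressible."""
    h, w = image.shape
    scores = np.zeros((h // block, w // block))
    for i in range(h // block):
        for j in range(w // block):
            tile = image[i*block:(i+1)*block, j*block:(j+1)*block]
            e = np.abs(np.fft.fft2(tile)).ravel() ** 2
            top = np.sort(e)[::-1][:keep].sum()
            scores[i, j] = top / max(e.sum(), 1e-12)
    return scores

# A constant tile is perfectly compressible; a noise tile is not.
rng = np.random.default_rng(1)
img = np.zeros((16, 16))
img[:8, :8] = 1.0                          # flat block
img[8:, 8:] = rng.standard_normal((8, 8))  # noisy block
s = block_compressibility(img)
print(s[0, 0], s[1, 1])
```

A sensor could then spend more measurements on low-score (information-rich) regions, which is essentially the saliency-driven allocation idea.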
With regard to your question about judging the usefulness of Compressed Sensing for a given signal at acquisition time: if the signal in question adheres to any kind of model known a priori, Compressed Sensing is possible. Accurate recovery simply depends on the ratio between the number of measurements taken and the degree to which the sampled signal adheres to your model. If it is a bad model, you won't get past the phase transition. If it is a good model, then you will be able to calculate an accurate reconstruction of the original signal. Additionally, Compressed Sensing measurements are, in general, future-proof: if a given number of measurements is insufficient to accurately recover the original signal using the model you have today, it is still possible to devise a better model tomorrow for which those same measurements are sufficient for accurate recovery.
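The dependence of recovery on the measurement count can be seen in a few lines. Below is a minimal sketch, assuming a Gaussian measurement matrix and using Orthogonal Matching Pursuit as the recovery algorithm (one standard choice, not the only one): with enough measurements relative to the sparsity, recovery is essentially exact, while far below the phase transition it fails badly.

```python
import numpy as np

rng = np.random.default_rng(0)

def omp(A, y, s):
    """Orthogonal Matching Pursuit: greedily recover an s-sparse x
    from y = A @ x. A bare-bones sketch, not a production solver."""
    support, residual = [], y.copy()
    for _ in range(s):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        sub = A[:, support]
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
        residual = y - sub @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

n, s = 200, 5
x = np.zeros(n)
x[rng.choice(n, s, replace=False)] = rng.standard_normal(s)

# Plenty of measurements (well above ~s*log(n)): exact recovery.
A = rng.standard_normal((80, n)) / np.sqrt(80)
err_good = np.linalg.norm(x - omp(A, A @ x, s))

# Far too few measurements: below the phase transition, recovery fails.
B = rng.standard_normal((8, n)) / np.sqrt(8)
err_bad = np.linalg.norm(x - omp(B, B @ x, s))
print(err_good, err_bad)
```

Note also the "future-proofing" point: the 8 measurements above are not wasted; a stronger signal model (or side information) could in principle make them sufficient later.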
Additional note (edit): The acquisition approach mentioned in your question sounded quite close to adaptive Compressed Sensing, so the following might be of interest to readers of this question. Recent results by Arias-Castro, Candès, and Davenport have shown that adaptive measurement strategies cannot, in theory, offer any significant gains over non-adaptive (i.e., blind) Compressed Sensing. I refer readers to their work, "On the Fundamental Limits of Adaptive Sensing", which should be appearing in the IEEE Transactions on Information Theory soon.