Monday, 20th April, 2015, 1:30pm – 5pm
Waheed U. Bajwa, Rutgers, The State University of New Jersey
Marco F. Duarte, University of Massachusetts, Amherst
While sparsity has been invoked in signal processing for more than four decades, it is only in the last decade or so that we have come to understand many of the theoretical underpinnings of sparse signal processing. These recent theoretical developments, most of which have come in the context of linear regression, statistical model selection, and sampling/ill-posed inverse problems, have given practitioners in numerous application areas a number of insights. These insights in turn have the potential to affect practice in areas such as biomarker identification from DNA microarray data; optical, hyperspectral, and medical imaging; radar, sonar, and array processing; sensor networks; and wireless communications. The potential impact of these developments is evidenced by the fact that, since 2006, ICASSP, ICIP, and SSP have collectively held close to a dozen tutorials on various aspects of sparse signal processing, covering both its basic theory (sampling, quantization, and optimization techniques) and its applications (MRI, optical imaging, and medical imaging).
Despite the recent developments reported in these tutorials, practitioners in many areas can still be found asking the following question: how can I design reliable systems that adhere to the physical constraints posed by my application, or guarantee that a given system or set of data is sufficient for reliable statistical inference? This is because the initial literature on sparse signal processing, and in turn the earlier tutorials, focused on randomized arguments and unstructured assumptions that give practitioners confidence in the usefulness of sparse signal processing but fail to provide a verifiable theory. Consider, for instance, the widely referenced restricted isometry property (RIP) in the context of compressive sampling. While random measurement matrices have been shown to satisfy the RIP with high probability, very few practitioners are aware of how these results translate to the arbitrary, often structured, measurement matrices arising in many real-world applications. This is because the RIP cannot be explicitly verified in polynomial time for a given measurement matrix. Similar limitations exist for the theory of sparse signal processing in the context of linear regression, model selection, etc.
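To make the verification obstacle concrete, the following sketch (not part of the tutorial materials; the matrix sizes and the `rip_constant` helper are illustrative) brute-forces the order-s restricted isometry constant of a small random matrix. The exhaustive search over column subsets is exactly what renders RIP verification intractable at realistic dimensions: the number of subsets grows combinatorially in the sparsity level.

```python
# Illustrative sketch: brute-force the order-s restricted isometry
# constant delta_s of a small matrix Phi. The combinatorial loop over
# column subsets is why this check does not scale to real problems.
import itertools
import numpy as np

def rip_constant(Phi, s):
    """Smallest delta such that (1-delta)||x||^2 <= ||Phi x||^2 <= (1+delta)||x||^2
    holds for all s-sparse x, found by exhaustive search over column subsets."""
    n = Phi.shape[1]
    delta = 0.0
    for cols in itertools.combinations(range(n), s):
        G = Phi[:, cols].T @ Phi[:, cols]    # Gram matrix of the chosen columns
        eigs = np.linalg.eigvalsh(G)         # ascending eigenvalues of G
        delta = max(delta, abs(eigs[0] - 1.0), abs(eigs[-1] - 1.0))
    return delta

rng = np.random.default_rng(0)
m, n, s = 10, 20, 2
Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # column-normalized Gaussian matrix
print(rip_constant(Phi, s))                      # already C(20, 2) = 190 subsets
```

Even at these toy dimensions the search visits 190 subsets; for a matrix with thousands of columns and moderate sparsity, the subset count is astronomically large, which is why verifiable alternatives to the RIP are of practical interest.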
Our goal in this tutorial is to help practitioners transition from the theory of sparse signal processing to its real-world implementation, especially those working in applications where standard random designs do not translate into a practical exploitation of signal sparsity. To achieve this goal, we leverage some of our recent results, as well as several from other researchers in the area (see the tutorial bibliography), and connect these results to applications in model selection, linear regression, and sampling in the context of wireless communications, cellular networks, optical imaging, and radar. Because of its nature, we believe this tutorial will interest a broad audience in the signal processing community who do not work directly in sparse signal processing but who want a quick means of translating theory to practice. We also believe it will interest a broad audience from industry, where sparse signal processing is still being evaluated for its potential to change the design of commercial systems.
Complete Printable Package
– 88 pages (2 slides/pg; 12.7 MB pdf)
Part I of Slides
– 76 slides (4.6 MB pdf)
Part II of Slides
– 73 slides (1.8 MB pdf)
– 12 pages (0.2 MB pdf)