
Proceedings Paper

Compressed sensing and generative models
Author(s): Eric Price

Paper Abstract

The goal of compressed sensing is to make use of image structure to estimate an image from a small number of linear measurements. The structure is typically represented by sparsity in a well-chosen basis. We describe how to achieve guarantees similar to those of standard compressed sensing without employing sparsity at all; instead, we suppose that vectors lie near the range of a generative model G: R^k → R^n. Our main theorem is that, if G is L-Lipschitz, then roughly O(k log L) random Gaussian measurements suffice; this is O(kd log n) for typical d-layer neural networks.

The above result describes how to use a model to recover a signal from noisy data. But if the data is noisy, how can we learn the generative model in the first place? This paper describes how to incorporate the measurement process into generative adversarial network (GAN) training. Even if the noisy data does not uniquely identify the non-noisy signal, the distribution of noisy data may still uniquely identify the distribution of non-noisy signals.

In presenting the above results we summarize and synthesize the work of [BJPD17] and [BPD18]. We then add some observations on the limitations of these approaches.
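The recovery problem the abstract describes can be illustrated with a minimal sketch: given Gaussian measurements y = A G(z*) of a signal near the range of a generative model G, estimate the signal by gradient descent on ||A G(z) − y||² over the latent code z, in the spirit of [BJPD17]. Everything below (the one-layer tanh "generator," the dimensions, the step size) is a toy assumption of this sketch, not the paper's construction:

```python
import numpy as np

# Toy setup (assumed for illustration): a one-layer generative model
# G(z) = tanh(W z) mapping R^k -> R^n, observed through m < n random
# Gaussian measurements y = A G(z*).
rng = np.random.default_rng(0)
k, n, m = 5, 100, 40
W = rng.standard_normal((n, k)) / np.sqrt(k)   # fixed generator weights
A = rng.standard_normal((m, n)) / np.sqrt(m)   # Gaussian measurement matrix

def G(z):
    return np.tanh(W @ z)

z_true = rng.standard_normal(k)
y = A @ G(z_true)  # noiseless measurements, for simplicity

# Recover z by gradient descent on f(z) = ||A G(z) - y||^2.
z = np.zeros(k)
step = 0.01
for _ in range(3000):
    g = G(z)
    r = A @ g - y                              # residual in measurement space
    grad = W.T @ ((1 - g**2) * (2 * (A.T @ r)))  # chain rule through tanh
    z -= step * grad

loss = np.sum((A @ G(z) - y) ** 2)
rel_err = np.linalg.norm(G(z) - G(z_true)) / np.linalg.norm(G(z_true))
```

Note that f(z) is nonconvex, so plain gradient descent carries no general guarantee of reaching the global minimum; the paper's O(k log L) measurement bound concerns the minimizer itself, and in practice random restarts over z are often used.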

Paper Details

Date Published: 9 September 2019
PDF: 7 pages
Proc. SPIE 11138, Wavelets and Sparsity XVIII, 111380R (9 September 2019); doi: 10.1117/12.2529939
Eric Price, The Univ. of Texas at Austin (United States)

Published in SPIE Proceedings Vol. 11138:
Wavelets and Sparsity XVIII
Dimitri Van De Ville; Manos Papadakis; Yue M. Lu, Editor(s)

© SPIE.