Systems | Information | Learning | Optimization
 

Towards a Theoretical Understanding of Inverse Problems with Neural Priors

Inverse problems of various flavors span all of science, engineering, and design. Over the last five years, approaches based on neural networks have emerged as the tool of choice for solving such problems. However, a clear theoretical understanding of how well such approaches perform — together with quantitative sample-complexity and running-time bounds — remains elusive.
We will first discuss a natural algorithmic approach for solving inverse problems using pre-trained neural generative models (such as GANs). This approach leads to upper (and in some cases, tight) sample-complexity bounds for inverse imaging problems such as compressive sensing and phase retrieval. The main drawback of GAN priors is that they require massive amounts of training data up front. To alleviate this, we will discuss and analyze algorithms for inverse imaging based on untrained network priors; these succeed even when no training data is available.
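As a concrete illustration (a minimal sketch, not the specific algorithms analyzed in the talk), the generative-prior approach to compressive sensing can be phrased as latent-space optimization: recover x ≈ G(z*) from measurements y = Ax by minimizing ||A G(z) - y||^2 over z. The generator, measurement matrix, and dimensions below are illustrative placeholders.

    import torch

    # Sketch: compressive sensing with a (frozen) pretrained generative prior.
    # Solve min_z || A G(z) - y ||_2^2 by gradient descent over the latent code.
    torch.manual_seed(0)
    n, m, k = 256, 64, 16              # signal dim, #measurements (m << n), latent dim

    G = torch.nn.Sequential(            # stand-in for a pretrained generator
        torch.nn.Linear(k, 128), torch.nn.ReLU(),
        torch.nn.Linear(128, n),
    )
    for p in G.parameters():            # generator weights stay frozen
        p.requires_grad_(False)

    A = torch.randn(m, n) / m ** 0.5    # random Gaussian measurement matrix
    x_true = G(torch.randn(k))          # ground truth assumed to lie in G's range
    y = A @ x_true                      # compressed measurements

    z = torch.randn(k, requires_grad=True)
    opt = torch.optim.Adam([z], lr=1e-2)
    for step in range(2000):
        opt.zero_grad()
        loss = torch.sum((A @ G(z) - y) ** 2)   # measurement misfit
        loss.backward()
        opt.step()

    print(f"relative recovery error: {torch.norm(G(z) - x_true) / torch.norm(x_true):.3e}")

The untrained-prior variant keeps the same objective but optimizes the network weights themselves (from random initialization) rather than searching the latent space of a pretrained model.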
Finally, we will introduce a new family of generative model priors that naturally interpolates between the data-rich and data-poor regimes. We will discuss their utility, along with theoretical guarantees, for solving certain families of partial differential equations.
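One plausible way to realize such an interpolation (a sketch under assumptions, not necessarily the family introduced in the talk) is to tune both the latent code and a copy of the generator's weights, while penalizing drift of the weights from their pretrained values: a large penalty recovers the pretrained-GAN regime, and a zero penalty recovers the untrained-prior regime. The function below is hypothetical and reuses the placeholder G, A, y, k from the sketch above.

    import copy
    import torch

    def recover(G_pre, k, A, y, lam, steps=2000, lr=1e-3):
        # Jointly fit the latent code z and a tunable copy of the generator,
        # anchoring the weights to the pretrained ones theta0. lam -> infinity
        # pins the weights (data-rich regime); lam = 0 frees them entirely
        # (data-poor, untrained-prior regime).
        G = copy.deepcopy(G_pre)
        theta0 = [p.detach().clone() for p in G_pre.parameters()]
        for p in G.parameters():
            p.requires_grad_(True)
        z = torch.randn(k, requires_grad=True)
        opt = torch.optim.Adam([z] + list(G.parameters()), lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            misfit = torch.sum((A @ G(z) - y) ** 2)      # data fidelity
            drift = sum(torch.sum((p - p0) ** 2)         # anchor to pretrained weights
                        for p, p0 in zip(G.parameters(), theta0))
            (misfit + lam * drift).backward()
            opt.step()
        return G(z).detach()

    # e.g.: x_hat = recover(G, k, A, y, lam=1.0)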
October 9 @ 12:30 pm (1h)

Discovery Building, Researchers’ Link

Chinmay Hegde