General Approaches for Solving Inverse Problems with Arbitrary Signal Models

Ill-posed inverse problems appear in many signal and image processing applications, such as deblurring, super-resolution, and compressed sensing. The common approach is to design a problem-specific algorithm or, more recently, a problem-specific deep neural network. Both the signal processing and machine learning approaches have drawbacks: traditional reconstruction strategies exhibit limited performance for complex signals, such as natural images, due to the difficulty of modeling them mathematically; while modern works that circumvent signal modeling by training deep convolutional neural networks (CNNs) suffer a large performance drop when the observation model used in training is inexact. In this work, we develop and analyze reconstruction algorithms that are not restricted to a specific signal model and are able to handle different observation models.

Our main contributions include: (a) We generalize the popular sparsity-based CoSaMP algorithm to any signal model that can be expressed as a union of low-dimensional linear subspaces, equip it with general recovery guarantees, and apply it to compressed sensing under a novel signal model that combines the sparse synthesis and cosparse analysis models. (b) We propose the Back-Projection (BP) fidelity term for ill-posed linear inverse problems, a novel alternative to the traditional Least Squares (LS) term. Using the proximal gradient method with the BP term and off-the-shelf denoisers (e.g., pre-trained CNN denoisers) yields a new reconstruction framework that we call "Iterative Denoising and Backward Projections" (IDBP). Our IDBP method gives excellent results in different tasks, requires less parameter tuning than LS-based methods, and is accompanied by theoretical motivation. (c) We propose an image-adaptive approach, where we tune CNN denoisers or generative adversarial networks (GANs) at test time to specialize them to the given observations. This approach leads to a performance boost when these networks serve as priors in inverse problems. The improvement is especially significant for GANs, as our method mitigates the effect of their limited representation capabilities (often referred to in the literature as "mode collapse").
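Contribution (a) builds on the classical CoSaMP algorithm. As background, here is a minimal NumPy sketch of standard sparsity-based CoSaMP (the k-sparse special case, not the generalized union-of-subspaces variant developed in the thesis); the function name, iteration count, and stopping rule are illustrative assumptions:

```python
import numpy as np

def cosamp(y, A, k, n_iter=20):
    """Sketch of classical CoSaMP for recovering a k-sparse x from y = A x."""
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(n_iter):
        r = y - A @ x                                # current residual
        proxy = A.T @ r                              # signal proxy (correlations)
        omega = np.argsort(np.abs(proxy))[-2 * k:]   # 2k largest proxy entries
        T = np.union1d(omega, np.flatnonzero(x))     # merge with current support
        b = np.zeros(n)
        b[T] = np.linalg.lstsq(A[:, T], y, rcond=None)[0]  # LS fit on merged support
        x = np.zeros(n)
        keep = np.argsort(np.abs(b))[-k:]            # prune to best k-term approximation
        x[keep] = b[keep]
    return x
```

By construction, each iterate is k-sparse; the thesis replaces the "largest entries" selection and pruning steps with projections onto a general union of low-dimensional subspaces.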
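Contribution (b) alternates enforcing consistency with the observations (via a pseudoinverse-based back-projection) with an off-the-shelf denoising step. A minimal sketch of such a plug-and-play iteration, using a toy soft-thresholding denoiser as a stand-in for a pre-trained CNN denoiser (function names and the fixed iteration count are illustrative assumptions, not the thesis's exact implementation):

```python
import numpy as np

def soft_threshold(v, tau):
    # Toy sparsity-promoting denoiser; a pre-trained CNN denoiser could be used instead.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def idbp(y, A, denoiser, n_iter=30):
    """Sketch of an iterative denoising / back-projection scheme for y = A x + noise.

    The back-projection step produces a z that agrees with the observations
    (A z = y when A has full row rank) while staying close to the current
    estimate x; the denoising step then imposes the signal prior.
    """
    A_pinv = np.linalg.pinv(A)
    n = A.shape[1]
    P_null = np.eye(n) - A_pinv @ A           # projector onto the null space of A
    x = A_pinv @ y                            # initialize with the pseudoinverse solution
    for _ in range(n_iter):
        z = A_pinv @ y + P_null @ x           # back-projection step: A z = y
        x = denoiser(z)                       # plug-and-play denoising step
    return x
```

The back-projection step differs from an LS gradient step: it fully re-imposes the measurements at every iteration, which is one intuition for the reduced parameter tuning mentioned in the abstract.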

File Type: pdf
File Size: 17 MB
Publication Year: 2020
Author: Tirer, Tom
Supervisor: Raja Giryes
Institution: Tel Aviv University
Keywords: Inverse problems, image restoration, (non)convex optimization, sparsity, deep learning, back-projection, objective functions