On some aspects of inverse problems in image processing
This work is concerned with two image-processing problems, image deconvolution with incomplete observations and data fusion of spectral images, and with some of the algorithms used to solve these and related problems.

In image-deconvolution problems, diagonalizing the blurring operator by means of the discrete Fourier transform typically yields large speedups. When the observations are incomplete (e.g., when the image boundaries are unknown), standard deconvolution techniques either involve non-diagonalizable operators, and are therefore slow, or rely on inexact convolution models, which introduce artifacts into the restored images. We propose a new deconvolution framework for images with incomplete observations that allows one to work with diagonalizable convolution operators, and is therefore very fast. The framework is also an efficient, high-quality alternative to existing boundary-handling methods such as edge tapering.

The data-fusion problem of inferring a hyperspectral image with high spectral and spatial resolutions from a spatially degraded hyperspectral image and a multispectral image acquired over the same geographical area has been the subject of recent research. We formulate this problem as the minimization of a convex function comprising two quadratic data-fitting terms and an edge-preserving regularizer. The regularizer, a form of vector total variation, promotes piecewise-smooth solutions with discontinuities aligned across the hyperspectral bands. The resulting algorithm outperforms the state of the art, as a series of experiments illustrates.

The algorithms typically used to solve problems with sparsity-inducing regularizers are generic, in the sense that they do not exploit the sparsity of the solution in any particular way.
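The speedup from diagonalization mentioned above can be illustrated with a minimal NumPy sketch. This is illustrative only, not the framework proposed in this work: the kernel, image size, and Tikhonov-regularized inverse below are assumptions chosen to keep the example self-contained, and the model is the fully periodic one (complete observations) that the discussed framework improves upon.

```python
import numpy as np

# A periodic (circular) blur is diagonalized by the 2-D DFT, so both the
# forward blur and its regularized inverse reduce to elementwise operations
# in the Fourier domain -- no large linear system needs to be solved.
rng = np.random.default_rng(0)
x = rng.standard_normal((32, 32))        # "true" image (illustrative)

# Small symmetric blur kernel, centered at the origin (periodic indexing).
h = np.zeros((32, 32))
h[0, 0] = 0.6
h[0, 1] = h[1, 0] = h[0, -1] = h[-1, 0] = 0.1

H = np.fft.fft2(h)                       # eigenvalues of the blur operator
y = np.real(np.fft.ifft2(H * np.fft.fft2(x)))   # blurred observation

# Tikhonov-regularized deconvolution, still elementwise (hence fast):
lam = 1e-6                               # illustrative regularization weight
x_hat = np.real(np.fft.ifft2(
    np.conj(H) * np.fft.fft2(y) / (np.abs(H) ** 2 + lam)))

err = float(np.max(np.abs(x_hat - x)))   # small for this well-posed example
```

Note that with incomplete observations (e.g., unknown boundaries), the observation operator is no longer a plain circular convolution and loses this diagonal structure, which is precisely the difficulty the proposed framework addresses.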
Methods such as semismooth Newton and active-set methods, however, are able to exploit this sparsity to accelerate their convergence. We show how to extend these algorithms in several directions, and study their convergence in (possibly infinite-dimensional) real Hilbert spaces. Additionally, we discuss the use of second-order information in the alternating direction method of multipliers (ADMM) when solving L2-plus-regularizer minimization problems.
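The "L2 + regularizer" problem class referred to above can be written, in illustrative notation (the symbols below are assumptions for exposition, not taken from this work), as:

```latex
% Generic L2-plus-regularizer problem:
%   A : linear operator (e.g., a blur), y : observations,
%   \phi : convex, possibly nonsmooth regularizer (e.g., \ell_1 or TV).
\min_{x}\; \tfrac{1}{2}\|Ax - y\|_2^2 + \lambda\,\phi(x)

% ADMM separates the two terms through a consensus constraint,
\min_{x,\,z}\; \tfrac{1}{2}\|Ax - y\|_2^2 + \lambda\,\phi(z)
\quad \text{subject to} \quad x = z,
% and alternates a minimization in x (a linear system, which is where
% second-order information about the quadratic term enters), a
% minimization in z (the proximal operator of \phi), and a dual update.
```

When φ is sparsity-inducing, the z-update zeroes out many components, which is the structure that semismooth Newton and active-set methods exploit.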
