Learned Image SR: Advances in Modeling and Generative Sample Selection

Super-resolution (SR) is an ill-posed inverse problem focused on reconstructing high-resolution images from low-resolution counterparts by recovering missing details. Despite advancements, SR faces persistent challenges in generalization, balancing fidelity and perceptual quality, mitigating artifacts, and ensuring trustworthy results. This thesis tackles these issues through innovations in model architecture, loss design, and sample selection. Central to our contributions is the use of wavelet losses, which improve the ability of SR models to distinguish genuine details from artifacts. By leveraging these losses in both GAN-based and transformer-based models, we achieve enhanced fidelity and perceptual quality. Furthermore, we augment transformer architectures with convolutional non-local sparse attention blocks and wavelet-based training, delivering state-of-the-art performance across diverse datasets. For generative models, we address the challenge of selecting a single trustworthy solution from the diverse outputs generated by flow-based and diffusion-based models. We propose image fusion ...
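The abstract does not spell out the wavelet loss itself; as a rough illustration of the idea (not the thesis's exact formulation), one can penalize discrepancies between the Haar-wavelet subbands of the SR output and the ground truth, with an assumed weight on the high-frequency detail bands where artifacts tend to concentrate:

```python
import torch
import torch.nn.functional as F

def haar_subbands(x):
    """Split images (B, C, H, W) into LL, LH, HL, HH subbands using a
    single-level Haar transform implemented as strided 2x2 convolutions."""
    ll = torch.tensor([[0.5, 0.5], [0.5, 0.5]])
    lh = torch.tensor([[0.5, 0.5], [-0.5, -0.5]])
    hl = torch.tensor([[0.5, -0.5], [0.5, -0.5]])
    hh = torch.tensor([[0.5, -0.5], [-0.5, 0.5]])
    k = torch.stack([ll, lh, hl, hh]).unsqueeze(1).to(x.dtype).to(x.device)
    b, c, h, w = x.shape
    out = F.conv2d(x.reshape(b * c, 1, h, w), k, stride=2)
    return out.reshape(b, c, 4, h // 2, w // 2)

def wavelet_loss(sr, hr, detail_weight=1.0):
    """L1 distance between wavelet subbands of the SR output and the ground
    truth; `detail_weight` (an assumed knob) emphasises the detail bands."""
    sr_sb, hr_sb = haar_subbands(sr), haar_subbands(hr)
    low = F.l1_loss(sr_sb[:, :, 0], hr_sb[:, :, 0])
    high = F.l1_loss(sr_sb[:, :, 1:], hr_sb[:, :, 1:])
    return low + detail_weight * high
```

Such a term would typically be added to a pixel-wise or adversarial loss rather than used alone.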

Cansu Korkmaz — Koç University


Deep learning for semantic description of visual human traits

The recent progress in artificial neural networks (rebranded as “deep learning”) has significantly boosted the state-of-the-art in numerous domains of computer vision, offering an opportunity to approach problems which were hardly solvable with conventional machine learning. Thus, in the frame of this PhD study, we explore how deep learning techniques can help in the analysis of two of the most basic and essential semantic traits revealed by a human face, namely, gender and age. In particular, two complementary problem settings are considered: (1) gender/age prediction from given face images, and (2) synthesis and editing of human faces with the required gender/age attributes. Convolutional Neural Networks (CNNs) have become the standard model for image-based object recognition in general, and are therefore a natural choice for addressing the first of these two problems. However, our preliminary studies have shown that the ...
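For orientation only, a two-headed CNN for the first problem setting (prediction) might be sketched as below; the layer sizes and heads are illustrative assumptions, not the architecture studied in the thesis:

```python
import torch.nn as nn

class GenderAgeCNN(nn.Module):
    """Toy CNN predicting gender (two-class logits) and age (scalar
    regression) from a face crop; placeholder sizes, not the thesis model."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.gender_head = nn.Linear(64, 2)  # two-class logits
        self.age_head = nn.Linear(64, 1)     # scalar age estimate

    def forward(self, x):
        z = self.features(x).flatten(1)
        return self.gender_head(z), self.age_head(z)
```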

Antipov, Grigory — Télécom ParisTech (Eurecom)


Sensing physical fields: Inverse problems for the diffusion equation and beyond

Due to significant advances made over the last few decades in the areas of (wireless) networking, communications and microprocessor fabrication, the use of sensor networks to observe physical phenomena is rapidly becoming commonplace. Over this period, many aspects of sensor networks have been explored, yet a thorough understanding of how to analyse and process the vast amounts of sensor data collected remains an open area of research. This work, therefore, aims to provide theoretical, as well as practical, advances in this area. In particular, we consider the problem of inferring certain underlying properties of the monitored phenomena from our sensor measurements. Within mathematics, this is commonly formulated as an inverse problem; whereas in signal processing, it appears as a (multidimensional) sampling and reconstruction problem. Indeed, it is well known that inverse problems are notoriously ill-posed and very demanding to solve; meanwhile ...
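For concreteness, the diffusion field model behind the title can be written in its standard form (generic notation, assumed here rather than quoted from the thesis):

```latex
% Diffusion field f driven by localized sources s and sampled by a sensor network:
\frac{\partial f(\mathbf{x},t)}{\partial t}
  = \kappa \, \nabla^2 f(\mathbf{x},t) + s(\mathbf{x},t),
% the inverse problem is to recover the source term s (e.g., source locations
% and intensities) from spatiotemporal samples f(\mathbf{x}_m, t_n).
```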

Murray-Bruce, John — Imperial College London


Model-based Techniques and Diffusion Models for Speech Dereverberation

Reverberation occurs in most of our environments and often degrades the intelligibility and quality of human speech, with an aggravated effect on hearing-impaired listeners. Meanwhile, the evolution of technologies for multimedia entertainment, communications and medical applications has led to a greater demand for improved sound quality. Therefore, many embedded devices now include a dereverberation algorithm, which aims to recover the anechoic component of speech. Dereverberation is an arduous task and an ill-posed inverse problem: even perfect knowledge of the room acoustics does not guarantee a perfectly dereverberated signal. Furthermore, in most real-life cases, such knowledge is not available, and therefore most dereverberation algorithms are blind, i.e. they must extract information from the reverberant speech signal only. Traditional dereverberation algorithms derive anechoic speech estimators by exploiting statistical properties of speech signals, distributional assumptions and even knowledge of room acoustics when available. ...
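A one-line model shows why the problem is ill-posed (generic notation, assumed here):

```latex
% Reverberant observation: anechoic speech s convolved with an unknown room
% impulse response h, plus noise n:
y(t) = (h * s)(t) + n(t),
% blind dereverberation must estimate s from y alone; many (h, s) pairs
% explain the same observation, hence the ill-posedness.
```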

Lemercier, Jean-Marie — University of Hamburg


Tradeoffs and limitations in statistically based image reconstruction problems

Advanced nuclear medical imaging systems collect multiple attributes of a large number of photon events, resulting in extremely large datasets which present challenges to image reconstruction and assessment. This dissertation addresses several of these challenges. The image formation process in nuclear medical imaging can be posed as a parametric estimation problem where the image pixels are the parameters of interest. Since image reconstruction in nuclear medical imaging is often an ill-posed inverse problem, unbiased estimators result in very noisy, high-variance images. Typically, smoothness constraints and a priori information are used to reduce variance in medical imaging applications at the cost of biasing the estimator. For such problems, there exists an inherent tradeoff between the recovered spatial resolution of an estimator, overall bias, and its statistical variance; lower variance can only be bought at the price of decreased spatial resolution and/or increased overall bias. ...
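The tradeoff invoked above is the standard error decomposition for an estimator of the pixel parameters:

```latex
% For an estimator \hat{\theta} of the image parameters \theta:
\mathrm{MSE}(\hat{\theta})
  = \underbrace{\lVert \mathbb{E}[\hat{\theta}] - \theta \rVert^2}_{\text{bias}^2}
  + \underbrace{\operatorname{tr}\operatorname{Cov}(\hat{\theta})}_{\text{variance}},
% regularization (smoothness constraints, priors) lowers the variance term
% only by increasing bias, i.e., by sacrificing recovered spatial resolution.
```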

Kragh, Tom — University of Michigan


Cosparse regularization of physics-driven inverse problems

Inverse problems related to physical processes are of great importance in practically every field related to signal processing, such as tomography, acoustics, wireless communications, medical and radar imaging, to name only a few. At the same time, many of these problems are quite challenging due to their ill-posed nature. On the other hand, signals originating from physical phenomena are often governed by laws expressible through linear Partial Differential Equations (PDE), or equivalently, integral equations and the associated Green’s functions. In addition, these phenomena are usually induced by sparse singularities, appearing as sources or sinks of a vector field. In this thesis we primarily investigate the coupling of such physical laws with a prior assumption on the sparse origin of a physical process. This gives rise to a “dual” regularization concept, formulated either as sparse analysis (cosparse), yielded by a PDE ...
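In symbols, the synthesis prior and the cosparse analysis regularization contrasted above can be sketched as follows (generic notation: M is the measurement operator, D a dictionary, and Ω an analysis operator, here a discretized PDE):

```latex
\hat{z}_{\mathrm{synthesis}} = D \,\arg\min_{c}\; \lVert c \rVert_1
  \quad \text{s.t.} \quad \lVert y - M D c \rVert_2 \le \varepsilon,
\qquad
\hat{z}_{\mathrm{analysis}} = \arg\min_{z}\; \lVert \Omega z \rVert_1
  \quad \text{s.t.} \quad \lVert y - M z \rVert_2 \le \varepsilon .
```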

Kitić, Srđan — Université de Rennes 1


Joint Modeling and Learning Approaches for Hyperspectral Imaging and Changepoint Detection

In the era of artificial intelligence, there has been a growing consensus that solutions to complex science and engineering problems require novel methodologies that can integrate interpretable physics-based modeling approaches with machine learning techniques, from stochastic optimization to deep neural networks. This thesis aims to develop new methodological and applied frameworks for combining the advantages of physics-based modeling and machine learning, with special attention to two important signal processing tasks: solving inverse problems in hyperspectral imaging and detecting changepoints in time series. The first part of the thesis addresses learning priors in model-based optimization for solving inverse problems in hyperspectral imaging systems. First, we introduce a tuning-free Plug-and-Play algorithm for hyperspectral image deconvolution (HID). Specifically, we decompose the optimization problem into two iterative sub-problems, learn deep priors to solve the blind denoising sub-problem with neural networks, and estimate hyperparameters with ...
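As background, a generic Plug-and-Play skeleton of the kind the abstract refers to is sketched below; the operator and denoiser names, the fixed step size, and the penalty weight `rho` are all assumptions, not the thesis's tuning-free algorithm:

```python
def pnp_deconvolution(y, blur, blur_adjoint, denoiser, rho=1.0, step=0.1, iters=50):
    """Generic Plug-and-Play deconvolution via half-quadratic splitting on
    NumPy arrays: alternate a gradient step on the data-fidelity term with a
    learned denoiser acting as the prior. Schematic only."""
    x = blur_adjoint(y)  # crude initialization from the adjoint
    z = x.copy()
    for _ in range(iters):
        # data sub-problem: gradient step on 0.5||y - A x||^2 + 0.5 rho ||x - z||^2
        grad = blur_adjoint(blur(x) - y) + rho * (x - z)
        x = x - step * grad
        # prior sub-problem: plug in the learned denoiser
        z = denoiser(x)
    return z
```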

Xiuheng Wang — Université Côte d'Azur


Gaussian Process Modelling for Audio Signals

Audio signals are characterised and perceived based on how their spectral make-up changes with time. Uncovering the behaviour of latent spectral components is at the heart of many real-world applications involving sound, but is a highly ill-posed task given the infinite number of ways any signal can be decomposed. This motivates the use of prior knowledge and a probabilistic modelling paradigm that can characterise uncertainty. This thesis studies the application of Gaussian processes to audio, which offer a principled non-parametric way to specify probability distributions over functions whilst also encoding prior knowledge. Along the way we consider what prior knowledge we have about sound, the way it behaves, and the way it is perceived, and write down these assumptions in the form of probabilistic models. We show how Bayesian time-frequency analysis can be reformulated as a spectral mixture Gaussian process, ...
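For reference, the standard one-dimensional spectral mixture kernel (Wilson & Adams, 2013) that such a reformulation builds on can be written in a few lines; parameter names here are assumptions:

```python
import numpy as np

def spectral_mixture_kernel(t1, t2, weights, means, scales):
    """k(tau) = sum_q w_q * exp(-2 pi^2 s_q^2 tau^2) * cos(2 pi mu_q tau):
    a Gaussian mixture over the spectrum, i.e. quasi-periodic components in
    time. t1 and t2 are 1-D arrays of time points."""
    tau = t1[:, None] - t2[None, :]
    k = np.zeros_like(tau, dtype=float)
    for w, mu, s in zip(weights, means, scales):
        k += w * np.exp(-2 * np.pi**2 * s**2 * tau**2) * np.cos(2 * np.pi * mu * tau)
    return k
```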

William Wilkinson — Queen Mary University of London


Solving inverse problems in room acoustics using physical models, sparse regularization and numerical optimization

Reverberation is a complex acoustic phenomenon that occurs inside rooms. Many audio signal processing methods, addressing source localization, signal enhancement and other tasks, often assume the absence of reverberation. Consequently, reverberant environments are considered challenging, as state-of-the-art methods can perform poorly in them. The acoustics of a room can be described using a variety of mathematical models, among which physical models are the most complete and accurate. The use of physical models in audio signal processing methods is often non-trivial, since it can lead to ill-posed inverse problems. These inverse problems require proper regularization to achieve meaningful results and involve the solution of computationally intensive large-scale optimization problems. Recently, however, sparse regularization has been applied successfully to inverse problems arising in different scientific areas. The increased computational power of modern computers and the development of new efficient optimization algorithms make it possible ...
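A representative first-order solver for the resulting ℓ1-regularized problems is iterative shrinkage-thresholding (ISTA), sketched here with generic forward/adjoint operator callables (not an algorithm taken from the thesis):

```python
import numpy as np

def ista(A, At, y, lam, step, iters=200):
    """Solve min_x 0.5 * ||y - A(x)||^2 + lam * ||x||_1 with ISTA.
    A and At are callables for the forward operator and its adjoint
    (e.g., a discretized room-acoustic model); step should satisfy
    step <= 1/L, with L the Lipschitz constant of the gradient."""
    x = np.zeros_like(At(y))
    for _ in range(iters):
        v = x - step * At(A(x) - y)  # gradient step on the data term
        x = np.sign(v) * np.maximum(np.abs(v) - step * lam, 0.0)  # soft threshold
    return x
```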

Antonello, Niccolò — KU Leuven


Sparsity Models for Signals: Theory and Applications

Many signal and image processing applications have benefited remarkably from the theory of sparse representations. In its classical form, this theory models a signal as having a sparse representation under a given dictionary -- this is referred to as the "Synthesis Model". In this work we focus on greedy methods for the problem of recovering a signal from a set of degraded linear measurements. We consider four different sparsity frameworks that extend the aforementioned synthesis model: (i) the cosparse analysis model; (ii) the signal space paradigm; (iii) the transform domain strategy; and (iv) the sparse Poisson noise model. Our algorithms of interest in the first part of the work are the greedy-like schemes: CoSaMP, subspace pursuit (SP), iterative hard thresholding (IHT) and hard thresholding pursuit (HTP). It has been shown for the synthesis model that these can achieve a stable recovery ...
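Of the greedy-like schemes listed, iterative hard thresholding (IHT) is the simplest to state; a plain synthesis-model version looks roughly like this (step size and iteration count are assumed defaults):

```python
import numpy as np

def iht(A, y, k, step=1.0, iters=100):
    """Recover a k-sparse x from y ≈ A x by alternating a gradient step
    with hard thresholding to the k largest-magnitude entries."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        v = x + step * A.T @ (y - A @ x)  # gradient step
        idx = np.argsort(np.abs(v))[-k:]  # support of the k largest entries
        x = np.zeros_like(v)
        x[idx] = v[idx]
    return x
```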

Giryes, Raja — Technion


Bayesian resolution of the nonlinear inverse problem of Electrical Impedance Tomography with Finite Element modeling

Resistivity distribution estimation, widely known as Electrical Impedance Tomography (EIT), is a nonlinear ill-posed inverse problem. Moreover, the partial differential equation governing this experiment yields no analytical solution for an arbitrary conductivity distribution. Thus, solving the forward problem requires an approximation. The Finite Element Method (FEM) provides us with a computationally cheap forward model which preserves the nonlinear image-data relation and also proves sufficiently accurate for the inversion. Within the Bayesian approach, Markovian priors on the log-conductivity distribution are introduced for regularization. The neighborhood system is directly derived from the FEM triangular mesh structure. We first propose maximum a posteriori (MAP) estimation with a Huber-Markov prior, which favours smooth distributions while preserving locally discontinuous features. The resulting criterion is minimized with the pseudo-conjugate gradient method. Simulation results reveal significant improvements in terms of robustness to noise, computational speed ...
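The Huber potential behind a Huber-Markov prior is standard and worth recalling; δ is a user-chosen threshold, and the exact parameterization used in the thesis may differ:

```latex
% Quadratic for small neighbour differences (smoothing), linear for large
% ones (edge preservation):
\phi_\delta(u) =
\begin{cases}
  u^2, & |u| \le \delta, \\
  2\delta\,|u| - \delta^2, & |u| > \delta.
\end{cases}
```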

Martin, Thierry — Laboratoire des signaux et systèmes


Bayesian Fusion of Multi-band Images: A Powerful Tool for Super-resolution

Hyperspectral (HS) imaging, which consists of acquiring the same scene in several hundred contiguous spectral bands (a three-dimensional data cube), has opened a new range of relevant applications, such as target detection [MS02], classification [C.-03] and spectral unmixing [BDPD+12]. However, while HS sensors provide abundant spectral information, their spatial resolution is generally more limited. Thus, fusing the HS image with other highly resolved images of the same scene, such as multispectral (MS) or panchromatic (PAN) images, is an interesting problem. The problem of fusing a high spectral and low spatial resolution image with an auxiliary image of higher spatial but lower spectral resolution, also known as multi-resolution image fusion, has been explored for many years [AMV+11]. From an application point of view, this problem is also important, as motivated by recent national programs, e.g., the Japanese next-generation space-borne ...
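A common linear forward model behind this fusion setting reads as follows (notation assumed here, not quoted from the thesis):

```latex
% X: scene at full spatial and spectral resolution; B, S: spatial blurring
% and downsampling; R: spectral response of the MS sensor; N: noise terms.
\mathbf{Y}_{\mathrm{HS}} = \mathbf{X}\mathbf{B}\mathbf{S} + \mathbf{N}_{\mathrm{HS}},
\qquad
\mathbf{Y}_{\mathrm{MS}} = \mathbf{R}\mathbf{X} + \mathbf{N}_{\mathrm{MS}},
% Bayesian fusion then estimates X from both observations under a prior.
```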

Wei, Qi — University of Toulouse


Bayesian Compressed Sensing using Alpha-Stable Distributions

During the last decades, information has been gathered and processed at an explosive rate. This gives rise to a very important issue: how to effectively and precisely describe the information content of a given source signal, or an ensemble of source signals, such that it can be stored, processed or transmitted while taking into consideration the limitations and capabilities of the various digital devices involved. One of the fundamental principles of signal processing for decades has been the Nyquist-Shannon sampling theorem, which states that the minimum number of samples needed to reconstruct a signal without error is dictated by its bandwidth. However, there are many cases in our everyday life in which sampling at the Nyquist rate results in too much data, demanding increased processing power as well as storage. A mathematical theory that emerged ...
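Written out, the two sampling regimes contrasted above are (generic notation):

```latex
% Nyquist--Shannon: a signal bandlimited to B Hz is recoverable from uniform
% samples taken at rate
f_s \ge 2B;
% compressed sensing instead recovers a sparse x \in \mathbb{R}^n from
% m \ll n linear measurements
y = \Phi x, \qquad \Phi \in \mathbb{R}^{m \times n}.
```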

Tzagkarakis, George — University of Crete


Signal and Image Processing Algorithms Using Interval Convex Programming and Sparsity

In this thesis, signal and image processing algorithms based on sparsity and interval convex programming are developed for inverse problems. In the literature, inverse signal processing problems are solved by minimizing ℓ1-norm or Total Variation (TV) based cost functions. A modified entropy functional approximating the absolute value function is defined. This functional is also used to approximate the ℓ1 norm, which is the most widely used cost function in sparse signal processing problems. The modified entropy functional is continuously differentiable and convex. As a result, it is possible to develop iterative, globally convergent algorithms for compressive sensing, denoising and restoration problems using the modified entropy functional. Iterative interval convex programming algorithms are constructed using Bregman’s D-Projection operator. In sparse signal processing, it is assumed that the signal can be represented using a sparse set of coefficients in ...
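The thesis defines its own modified entropy functional; purely for intuition (this is not the thesis's definition), a generic smooth convex surrogate for the absolute value with the same key properties is

```latex
\phi_\epsilon(x) = \sqrt{x^2 + \epsilon^2} - \epsilon,
\qquad \phi_\epsilon(x) \to |x| \ \text{as} \ \epsilon \to 0,
% continuously differentiable and convex, which is what enables globally
% convergent iterative minimization of the resulting cost functions.
```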

Kose, Kivanc — Bilkent University


Joint Sparsity-Driven Inversion and Model Error Correction for SAR Imaging

Image formation algorithms in a variety of applications have explicit or implicit dependence on a mathematical model of the observation process. Inaccuracies in the observation model may cause various degradations and artifacts in the reconstructed images. The application of interest in this thesis is synthetic aperture radar (SAR) imaging, which particularly suffers from motion-induced model errors. These types of errors result in phase errors in the SAR data, which cause defocusing of the reconstructed images. Particularly focusing on the imaging of fields that admit a sparse representation, we propose a sparsity-driven method for joint SAR imaging and phase error correction. In this technique, phase error correction is performed during the image formation process. The problem is set up as an optimization problem in a nonquadratic regularization-based framework. The method involves an iterative algorithm, each iteration of which consists of consecutive steps of ...
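Schematically, the joint problem described above takes the form below (notation assumed here): the field f and the phase-error parameters β are estimated together,

```latex
(\hat{f}, \hat{\beta}) = \arg\min_{f,\,\beta}\;
  \lVert y - C(\beta)\, f \rVert_2^2 + \lambda \lVert f \rVert_1,
% C(\beta): SAR observation model including motion-induced phase errors;
% the iterative algorithm alternates image-formation and phase-update steps.
```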

Önhon, N. Özben — Faculty of Engineering and Natural Sciences, Sabancı University
