Pointwise shape-adaptive DCT image filtering and signal-dependent noise estimation

When an image is acquired by a digital imaging sensor, it is always degraded by some noise. This leads to two basic questions: what are the main characteristics of this noise, and how can it be removed? These questions in turn correspond to two key problems in signal processing: noise estimation and noise removal (so-called denoising). This thesis addresses both of these problems and provides a number of original and effective contributions towards their solution. The first part of the thesis introduces a novel image denoising algorithm based on the low-complexity Shape-Adaptive Discrete Cosine Transform (SA-DCT). Because the transform supports are spatially adaptive, the quality of the filtered image is high, with clean edges and no disturbing artifacts. We further present extensions of this approach to image deblurring, deringing and deblocking, as well as to color image filtering. For all these applications, ...
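The transform-domain thresholding idea behind this approach can be illustrated with a deliberately simplified sketch. The actual SA-DCT adapts the transform support to the local image shape; the toy version below uses a fixed square block, and all parameter values are illustrative assumptions, not the thesis's:

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_denoise_block(block, sigma, thr_mult=2.7):
    """Denoise one image block by hard-thresholding its 2-D DCT coefficients.

    Toy fixed-support version; the thesis's SA-DCT instead uses a
    shape-adaptive support around each point.
    """
    coeffs = dctn(block, norm="ortho")
    coeffs[np.abs(coeffs) < thr_mult * sigma] = 0.0  # zero noise-level coefficients
    return idctn(coeffs, norm="ortho")

rng = np.random.default_rng(0)
clean = np.outer(np.sin(np.linspace(0, np.pi, 8)), np.ones(8))  # smooth toy patch
noisy = clean + 0.1 * rng.standard_normal((8, 8))
denoised = dct_denoise_block(noisy, sigma=0.1)
```

Hard thresholding keeps only the few large coefficients carrying the smooth structure; choosing supports that do not straddle edges (the shape-adaptive part) is what yields clean edges without ringing.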

Foi, Alessandro — Tampere University of Technology


Tradeoffs and limitations in statistically based image reconstruction problems

Advanced nuclear medical imaging systems collect multiple attributes of a large number of photon events, resulting in extremely large datasets which present challenges to image reconstruction and assessment. This dissertation addresses several of these challenges. The image formation process in nuclear medical imaging can be posed as a parametric estimation problem where the image pixels are the parameters of interest. Since nuclear medical imaging applications are often ill-posed inverse problems, unbiased estimators result in very noisy, high-variance images. Typically, smoothness constraints and a priori information are used to reduce variance in medical imaging applications at the cost of biasing the estimator. For such problems, there exists an inherent tradeoff between an estimator's recovered spatial resolution, its overall bias, and its statistical variance; lower variance can only be bought at the price of decreased spatial resolution and/or increased overall bias. ...

Kragh, Tom — University of Michigan


Kernel PCA and Pre-Image Iterations for Speech Enhancement

In this thesis, we present novel methods to enhance speech corrupted by noise. All methods are based on the processing of complex-valued spectral data. First, kernel principal component analysis (PCA) for speech enhancement is proposed. Subsequently, a simplification of kernel PCA, called pre-image iterations (PI), is derived. This method computes enhanced feature vectors iteratively by linear combination of noisy feature vectors. The weighting for the linear combination is found by a kernel function that measures the similarity between the feature vectors. The kernel variance is a key parameter that controls the degree of de-noising and has to be set according to the signal-to-noise ratio (SNR). Initially, PI were proposed for speech corrupted by additive white Gaussian noise. To be independent of knowledge about the SNR and to generalize to other stationary noise types, PI are extended by automatic determination of the ...
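The kernel-weighted linear combination described above can be sketched as follows. The feature extraction, complex spectral processing, and stopping rule of the actual method are omitted; the vector sizes and the kernel variance `c` are toy assumptions:

```python
import numpy as np

def pre_image_step(Y, y, c):
    """One pre-image-style update: re-estimate y as a combination of the noisy
    feature vectors Y, weighted by a Gaussian kernel with variance parameter c."""
    w = np.exp(-np.sum((Y - y) ** 2, axis=1) / c)  # similarity to each noisy vector
    return (w[:, None] * Y).sum(axis=0) / w.sum()  # kernel-weighted combination

rng = np.random.default_rng(1)
clean = np.sin(np.linspace(0, 2 * np.pi, 16))
Y = clean + 0.3 * rng.standard_normal((50, 16))  # 50 noisy feature vectors (toy)
y = Y[0].copy()
for _ in range(10):  # iterate the linear combination to de-noise
    y = pre_image_step(Y, y, c=2.0)
```

Larger `c` averages more aggressively (more de-noising), while smaller `c` keeps the estimate close to the noisy data, which is why the abstract ties the kernel variance to the SNR.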

Leitner, Christina — Graz University of Technology


Advances in DFT-Based Single-Microphone Speech Enhancement

The interest in the field of speech enhancement emerges from the increased usage of digital speech processing applications like mobile telephony, digital hearing aids and human-machine communication systems in our daily life. The trend to make these applications mobile increases the variety of potential sources for quality degradation. Speech enhancement methods can be used to increase the quality of these speech processing devices and make them more robust under noisy conditions. The name "speech enhancement" refers to a large group of methods that are all meant to improve certain quality aspects of these devices. Examples of speech enhancement algorithms are echo control, bandwidth extension, packet loss concealment and noise reduction. In this thesis we focus on single-microphone additive noise reduction and aim at methods that work in the discrete Fourier transform (DFT) domain. The main objective of the presented research ...
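As a minimal illustration of DFT-domain noise reduction (a generic Wiener-gain scheme, not the estimators developed in the thesis), the sketch below filters each bin with a gain derived from a maximum-likelihood a-priori SNR estimate; frame length, overlap, and the known-noise-PSD assumption are all toy choices:

```python
import numpy as np

def dft_domain_denoise(noisy, noise_psd, frame=256, hop=128):
    """Toy DFT-domain noise reduction: per-bin Wiener gain with overlap-add."""
    win = np.hanning(frame)
    out = np.zeros_like(noisy)
    norm = np.zeros_like(noisy)
    for start in range(0, len(noisy) - frame + 1, hop):
        seg = noisy[start:start + frame] * win
        spec = np.fft.rfft(seg)
        snr = np.maximum(np.abs(spec) ** 2 / noise_psd - 1.0, 0.0)  # a-priori SNR (ML)
        gain = snr / (snr + 1.0)                                    # Wiener gain
        out[start:start + frame] += np.fft.irfft(gain * spec) * win
        norm[start:start + frame] += win ** 2
    return out / np.maximum(norm, 1e-12)

rng = np.random.default_rng(2)
t = np.arange(8192) / 8000.0
clean = np.sin(2 * np.pi * 440 * t)        # toy "speech": a pure tone
noise = 0.3 * rng.standard_normal(t.size)
noisy = clean + noise
# Noise PSD estimated from a noise-only stretch (assumed available here)
noise_frames = noise[:2048].reshape(-1, 256) * np.hanning(256)
noise_psd = np.mean(np.abs(np.fft.rfft(noise_frames, axis=1)) ** 2, axis=0)
enhanced = dft_domain_denoise(noisy, noise_psd)
```

In practice the noise PSD is unknown and time-varying, which is precisely where more sophisticated DFT-domain estimators come in.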

Hendriks, Richard Christian — Delft University of Technology


Sparsity Models for Signals: Theory and Applications

Many signal and image processing applications have benefited remarkably from the theory of sparse representations. In its classical form, this theory models a signal as having a sparse representation under a given dictionary -- this is referred to as the "Synthesis Model". In this work we focus on greedy methods for the problem of recovering a signal from a set of deteriorated linear measurements. We consider four different sparsity frameworks that extend the aforementioned synthesis model: (i) the cosparse analysis model; (ii) the signal space paradigm; (iii) the transform domain strategy; and (iv) the sparse Poisson noise model. Our algorithms of interest in the first part of the work are the greedy-like schemes: CoSaMP, subspace pursuit (SP), iterative hard thresholding (IHT) and hard thresholding pursuit (HTP). It has been shown for the synthesis model that these can achieve a stable recovery ...
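Of the greedy-like schemes listed, IHT is the simplest to state. A textbook synthesis-model version (noiseless measurements, toy Gaussian sensing matrix; sizes are illustrative) alternates a gradient step with keeping the k largest entries:

```python
import numpy as np

def iht(A, y, k, iters=200, step=None):
    """Iterative hard thresholding for y = A x with x k-sparse (synthesis model)."""
    m, n = A.shape
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2  # conservative step from spectral norm
    x = np.zeros(n)
    for _ in range(iters):
        x = x + step * (A.T @ (y - A @ x))      # gradient step on ||y - A x||^2
        small = np.argsort(np.abs(x))[:-k]      # indices of all but the k largest
        x[small] = 0.0                          # hard threshold: keep k largest
    return x

rng = np.random.default_rng(3)
m, n, k = 80, 200, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)    # toy Gaussian sensing matrix
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true
x_hat = iht(A, y, k)
```

Stable-recovery results for schemes of this kind hinge on restricted isometry properties of A, which random Gaussian matrices satisfy with high probability at these dimensions.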

Giryes, Raja — Technion


Bayesian Approaches in Image Source Separation

In this thesis, a general solution to the component separation problem in images is introduced. Unlike most existing works, the spatial dependencies of images are modelled in the separation process with the use of Markov random fields (MRFs). In the MRF model, a Cauchy density is used for the gradient images. We provide a general Bayesian framework for the estimation of the parameters of this model. Due to the intractability of the problem, we resort to numerical solutions for the joint maximization of the a posteriori distribution of the sources, the mixing matrix and the noise variances. For the numerical solution, four different methods are proposed. In the first method, the difficulty of working analytically with general Gibbs distributions of MRFs is overcome by using an approximate density. In this approach, the Gibbs distribution is modelled by the product of directional Gaussians. The ...

Kayabol, Koray — Istanbul University


Bayesian Compressed Sensing using Alpha-Stable Distributions

During the last decades, information has been gathered and processed at an explosive rate. This fact gives rise to a very important issue: how to effectively and precisely describe the information content of a given source signal, or an ensemble of source signals, such that it can be stored, processed or transmitted while taking into consideration the limitations and capabilities of the various digital devices. One of the fundamental principles of signal processing has for decades been the Nyquist-Shannon sampling theorem, which states that the minimum number of samples needed to reconstruct a signal without error is dictated by its bandwidth. However, there are many cases in our everyday life in which sampling at the Nyquist rate results in too much data, demanding increased processing power as well as storage requirements. A mathematical theory that emerged ...

Tzagkarakis, George — University of Crete


Adaptive Nonlocal Signal Restoration and Enhancement Techniques for High-Dimensional Data

The large number of practical applications involving digital images has motivated a significant interest towards restoration solutions that improve the visual quality of the data under the presence of various acquisition and compression artifacts. Digital images are the result of an acquisition process based on the measurement of a physical quantity of interest incident upon an imaging sensor over a specified period of time. The quantity of interest depends on the targeted imaging application. Common imaging sensors measure the number of photons impinging over a dense grid of photodetectors in order to produce an image similar to what is perceived by the human visual system. Other applications focus on parts of the electromagnetic spectrum not visible to the human visual system, and thus require different sensing technologies to form the image. In all cases, even with the advance of ...

Maggioni, Matteo — Tampere University of Technology


Compressive Sensing of Cyclostationary Propeller Noise

This dissertation is the combination of three manuscripts (either published in or submitted to journals) on compressive sensing of propeller noise for the detection, identification and localization of watercraft. Propeller noise, produced by the rotating blades, is broadband and radiates through water, dominating the underwater acoustic noise spectrum especially when cavitation develops. Propeller cavitation yields cyclostationary noise which can be modeled by amplitude modulation, i.e., an envelope-carrier product. The envelope consists of the so-called propeller tonals representing propeller characteristics, which are used to identify watercraft, whereas the carrier is a stationary broadband process. Sampling for propeller noise processing yields large data sizes due to the Nyquist rate and multiple sensor deployment. A compressive sensing scheme is proposed for efficient sampling of second-order cyclostationary propeller noise, since the spectral correlation function of the amplitude modulation model is sparse as shown in ...
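The envelope-carrier product model can be synthesized directly. In this toy sketch (the blade rate, harmonic amplitudes and durations are invented for illustration), the cyclostationarity shows up as a spectral line at the blade rate once the signal is squared:

```python
import numpy as np

fs, T = 8000, 4.0
t = np.arange(int(fs * T)) / fs
rng = np.random.default_rng(4)

blade_rate = 12.0  # Hz, hypothetical blade-passing rate
envelope = (1.0 + 0.5 * np.cos(2 * np.pi * blade_rate * t)
                + 0.25 * np.cos(2 * np.pi * 2 * blade_rate * t))  # propeller tonals
carrier = rng.standard_normal(t.size)   # stationary broadband carrier
noise = envelope * carrier              # amplitude-modulated cavitation noise

# Squaring demodulates the envelope: its spectrum reveals the cyclic lines
spec = np.abs(np.fft.rfft(noise ** 2 - np.mean(noise ** 2)))
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
band = (freqs > 5) & (freqs < 100)
peak_freq = freqs[band][np.argmax(spec[band])]
```

The spectral correlation function of such a signal concentrates on the discrete cyclic frequencies (blade rate and harmonics), which is the sparsity that compressive sampling schemes can exploit.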

Fırat, Umut — Istanbul Technical University


Analysis, Design, and Evaluation of Acoustic Feedback Cancellation Systems for Hearing Aids

Acoustic feedback problems occur when the output loudspeaker signal of an audio system is partly returned to the input microphone via an acoustic coupling through the air. This problem often causes significant performance degradations in applications such as public address systems and hearing aids. In the worst case, the audio system becomes unstable and howling occurs. In this work, first we analyze a general multiple-microphone audio processing system, where a cancellation system using adaptive filters is used to cancel the effect of acoustic feedback. We introduce and derive an accurate approximation of a frequency domain measure—the power transfer function—and show how it can be used to predict the behavior of the entire cancellation system across time and frequency without knowing the true acoustic feedback paths. Furthermore, we consider the biased estimation problem, which is one of the most challenging ...
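The adaptive-cancellation idea can be sketched with a generic NLMS filter identifying a toy feedback path. This idealized setting assumes no near-end speech and a white loudspeaker signal (which sidesteps the biased-estimation problem the abstract mentions); the thesis's power-transfer-function analysis is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(7)
n, L = 20000, 32                       # sample count and toy feedback-path length
h_true = rng.standard_normal(L) * np.exp(-0.2 * np.arange(L))  # toy feedback path
u = rng.standard_normal(n)             # loudspeaker signal (white probe, toy)
mic = np.convolve(u, h_true)[:n]       # microphone picks up only feedback here

h_hat = np.zeros(L)
mu, eps = 0.5, 1e-6
for k in range(L, n):
    x = u[k - L + 1:k + 1][::-1]       # most recent L loudspeaker samples
    e = mic[k] - h_hat @ x             # residual after cancellation
    h_hat += mu * e * x / (x @ x + eps)  # NLMS update of the feedback-path model
misalignment = np.linalg.norm(h_hat - h_true) / np.linalg.norm(h_true)
```

With near-end speech present, the loudspeaker and microphone signals become correlated and the estimate is biased, which is exactly the challenge the thesis addresses next.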

Guo, Meng — Aalborg University


Adaptive interference suppression algorithms for DS-UWB systems

In multiuser ultra-wideband (UWB) systems, a large number of multipath components (MPCs) are introduced by the channel. One of the main challenges for the receiver is to effectively suppress the interference with affordable complexity. In this thesis, we focus on linear adaptive interference suppression algorithms for direct-sequence ultra-wideband (DS-UWB) systems in both the time domain and the frequency domain. In the time domain, multiuser DS-UWB systems with symbol-by-symbol transmission are considered. We first investigate a generic reduced-rank scheme based on the concept of joint and iterative optimization (JIO) that jointly optimizes a projection vector and a reduced-rank filter by using the minimum mean-squared error (MMSE) criterion. A low-complexity scheme, named Switched Approximations of Adaptive Basis Functions (SAABF), is proposed as a modification of the generic scheme, in which the complexity reduction is achieved by using a multi-branch framework to simplify the structure ...

Li, Sheng — University of York


Statistical signal processing of spectrometric data: study of the pileup correction for energy spectra applied to Gamma spectrometry

The main objective of $\gamma$ spectrometry is to characterize the radioactive elements of an unknown source by studying the energy of the emitted $\gamma$ photons. When a photon interacts with a detector, its photonic energy is converted into an electrical pulse, whose integral energy is measured. The histogram obtained by collecting these energies can be used to identify radionuclides and measure their activity. However, at high counting rates, perturbations due to the stochastic nature of the temporal signal can cripple the identification of the radioactive elements. More specifically, since the detector has a finite resolution, close arrival times of photons, which can be modeled as a homogeneous Poisson process, cause pileups of individual pulses. This phenomenon distorts energy spectra by introducing multiple spurious spikes and artificially prolonging the Compton continuum, which can mask spikes of low intensity. The ...
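The pileup mechanism is easy to simulate: with homogeneous Poisson arrivals, pulses whose arrival times are closer than the pulse duration overlap and their measured energies add. The counting rate, pulse duration, and energy law below are toy assumptions, not values from the thesis:

```python
import numpy as np

rng = np.random.default_rng(5)
rate, n_photons = 2e5, 20000       # counting rate (events/s) and count, toy values
tau = 2e-6                         # pulse duration (s): pulses closer than this merge

arrivals = np.cumsum(rng.exponential(1.0 / rate, n_photons))  # homogeneous Poisson
energies = rng.gamma(shape=9.0, scale=70.0, size=n_photons)   # toy energy law

measured = []                      # energies actually histogrammed, after pileup
acc = energies[0]
for i in range(1, n_photons):
    if arrivals[i] - arrivals[i - 1] < tau:
        acc += energies[i]         # overlapping pulses sum: a pileup event
    else:
        measured.append(acc)
        acc = energies[i]
measured.append(acc)
measured = np.array(measured)
pileup_fraction = 1.0 - len(measured) / n_photons  # about 1 - exp(-rate * tau)
```

Summed energies pile up into spurious high-energy counts, which is why the recorded spectrum is distorted at high rates and a statistical correction is needed.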

Trigano, Thomas — Télécom Paris Tech


Compressed sensing and dimensionality reduction for unsupervised learning

This work aims at exploiting compressive sensing paradigms in order to reduce the cost of statistical learning tasks. We first review the fundamentals of compressive sensing and describe some statistical analysis tasks using similar ideas. Then we describe a framework for performing parameter estimation on probabilistic mixture models in a case where the training data is compressed to a fixed-size representation called a sketch. We formulate the estimation as a generalized inverse problem for which we propose a greedy algorithm. We evaluate this framework and algorithm on an isotropic Gaussian mixture model. This proof of concept suggests the existence of theoretical recovery guarantees for sparse objects beyond the usual vector and matrix cases. We therefore study the generalization of stability results for linear inverse problems to general signal models encompassing the standard cases as well as sparse mixtures of probability distributions. We ...
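One concrete way to realize such a fixed-size sketch (used here purely as an illustration; the dimensions and frequencies are toy assumptions) is to record the empirical characteristic function of the data at a few random frequencies, collapsing the whole dataset to m numbers regardless of the number of samples:

```python
import numpy as np

rng = np.random.default_rng(6)
d, n, m = 2, 10000, 64  # data dimension, sample count, sketch size (toy values)

# Toy data: equal-weight mixture of two isotropic Gaussians centred at -2 and +2
X = np.vstack([rng.normal(-2.0, 1.0, (n // 2, d)),
               rng.normal(2.0, 1.0, (n // 2, d))])

Omega = rng.standard_normal((m, d))             # m random frequency vectors
sketch = np.exp(1j * X @ Omega.T).mean(axis=0)  # empirical characteristic function

# The sketch stays close to the mixture model's characteristic function, so the
# mixture parameters can in principle be recovered from these m numbers alone
mu1, mu2 = -2.0 * np.ones(d), 2.0 * np.ones(d)
cf_model = 0.5 * (np.exp(1j * Omega @ mu1) + np.exp(1j * Omega @ mu2)) \
    * np.exp(-0.5 * np.sum(Omega ** 2, axis=1))
err = np.abs(sketch - cf_model).max()
```

Fitting mixture parameters so that the model's characteristic function matches the sketch is a generalized inverse problem, which is where the greedy algorithm mentioned above comes in.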

Bourrier, Anthony — INRIA, Technicolor


Sensing physical fields: Inverse problems for the diffusion equation and beyond

Due to significant advances made over the last few decades in the areas of (wireless) networking, communications and microprocessor fabrication, the use of sensor networks to observe physical phenomena is rapidly becoming commonplace. Over this period, many aspects of sensor networks have been explored, yet a thorough understanding of how to analyse and process the vast amounts of sensor data collected remains an open area of research. This work therefore aims to provide theoretical, as well as practical, advances in this area. In particular, we consider the problem of inferring certain underlying properties of the monitored phenomena from our sensor measurements. Within mathematics, this is commonly formulated as an inverse problem, whereas in signal processing it appears as a (multidimensional) sampling and reconstruction problem. Indeed, it is well known that inverse problems are notoriously ill-posed and very demanding to solve; meanwhile ...

Murray-Bruce, John — Imperial College London


Robust Estimation and Model Order Selection for Signal Processing

In this thesis, advanced robust estimation methodologies for signal processing are developed and analyzed. The developed methodologies solve problems concerning multi-sensor data, robust model selection as well as robustness for dependent data. The work has been applied to solve practical signal processing problems in different areas of biomedical and array signal processing. In particular, for univariate independent data, a robust criterion is presented to select the model order, with an application to corneal-height data modeling. The proposed criterion overcomes some limitations of existing robust criteria. For real-world data, it selects the radial model order of the Zernike polynomial of the corneal topography map in accordance with clinical expectations, even if the measurement conditions for videokeratoscopy, which is the state-of-the-art method to collect corneal-height data, are poor. For multi-sensor data, robust model order selection criteria are proposed and applied ...

Muma, Michael — Technische Universität Darmstadt
