Exploiting Prior Information in Parametric Estimation Problems for Multi-Channel Signal Processing Applications

This thesis addresses a number of problems all related to parameter estimation in sensor array processing. The unifying theme is that some of these parameters are known before the measurements are acquired. We thus study how to improve the estimation of the unknown parameters by incorporating knowledge of the known parameters; exploiting this knowledge successfully has the potential to dramatically improve the accuracy of the estimates. For covariance matrix estimation, we exploit the fact that the true covariance matrix is Kronecker- and Toeplitz-structured, and we devise a method that ensures the estimates possess this structure. Additionally, we show that our proposed estimator outperforms the state of the art when the number of samples is low, and that it is efficient in the sense that the variance of the estimates attains the Cramér-Rao lower bound (CRB). In the direction of ...
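
To make the structured-covariance idea concrete, here is a minimal numerical sketch (not the estimator developed in the thesis): a covariance matrix that is the Kronecker product of two Toeplitz factors is estimated from few samples, first with the plain sample covariance and then with a crude structure-enforcing step based on the Van Loan-Pitsianis nearest-Kronecker-product rearrangement plus diagonal averaging. All dimensions and parameters below are invented for the illustration.

```python
# Illustrative sketch (not the thesis algorithm): enforce Kronecker structure on a sample
# covariance via the nearest-Kronecker-product rearrangement, then make each factor
# Toeplitz by averaging along its diagonals.
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(0)
p, q = 4, 3                                  # factor dimensions; R_true is (p*q) x (p*q)

def toeplitz_avg(M):
    """Average each diagonal of a square matrix to obtain a Toeplitz matrix."""
    n = M.shape[0]
    col = [np.mean(np.diag(M, -k)) for k in range(n)]
    row = [np.mean(np.diag(M, k)) for k in range(n)]
    return toeplitz(col, row)

def nearest_kron(R, p, q):
    """Best Frobenius-norm approximation of R by kron(A, B), A (p x p), B (q x q)."""
    # Rearrange R so the Kronecker structure becomes a rank-1 structure, then take
    # the dominant singular pair (Van Loan / Pitsianis).
    blocks = R.reshape(p, q, p, q).transpose(0, 2, 1, 3).reshape(p * p, q * q)
    U, s, Vt = np.linalg.svd(blocks, full_matrices=False)
    A = np.sqrt(s[0]) * U[:, 0].reshape(p, p)
    B = np.sqrt(s[0]) * Vt[0].reshape(q, q)
    return A, B

# Ground truth: Kronecker product of two Toeplitz factors
R_true = np.kron(toeplitz(0.9 ** np.arange(p)), toeplitz(0.5 ** np.arange(q)))

# Few samples -> noisy unstructured sample covariance
X = rng.multivariate_normal(np.zeros(p * q), R_true, size=25)
R_hat = X.T @ X / X.shape[0]

A_hat, B_hat = nearest_kron(R_hat, p, q)
R_struct = np.kron(toeplitz_avg(A_hat), toeplitz_avg(B_hat))

print("error, sample covariance  :", np.linalg.norm(R_hat - R_true))
print("error, structured estimate:", np.linalg.norm(R_struct - R_true))
```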

Wirfält, Petter — KTH Royal Institute of Technology


Subspace-based quantification of magnetic resonance spectroscopy data using biochemical prior knowledge

Nowadays, Nuclear Magnetic Resonance (NMR) is widely used in oncology as a non-invasive diagnostic tool to detect the presence of tumor regions in the human body. One application of NMR is Magnetic Resonance Imaging, which is applied in routine clinical practice to localize tumors and determine their size. Magnetic Resonance Imaging is able to provide an initial diagnosis, but its ability to delineate anatomical and pathological information is significantly improved by its combination with another NMR application, namely Magnetic Resonance Spectroscopy. The latter reveals information on the biochemical profile of tissues, thereby allowing clinicians and radiologists to identify in a non-invasive way the different tissue types characterizing the sample under investigation, and to study the biochemical changes underlying a pathological situation. In particular, an NMR application exists which provides spatial as well as biochemical information. This application is called ...

Laudadio, Teresa — Katholieke Universiteit Leuven


Parameter Estimation and Filtering Using Sparse Modeling

Sparsity-based estimation techniques deal with the problem of retrieving a data vector from an undercomplete set of linear observations, when the data vector is known to have few nonzero elements at unknown positions. This is also known as the atomic decomposition problem and has been carefully studied in the field of compressed sensing. Recent findings have established basis pursuit, also known as the Least Absolute Shrinkage and Selection Operator (LASSO), as a numerically reliable sparsity-based approach. Although the atomic decomposition problem is generally NP-hard, it has been shown that basis pursuit may provide exact solutions under certain assumptions. This has led to an extensive study of signals with sparse representations in different domains, providing new general insight into signal processing. This thesis further investigates the role of sparsity-based techniques, especially basis pursuit, for solving parameter estimation ...
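
As a reference point for the basis pursuit / LASSO formulation mentioned above, the following minimal sketch recovers a sparse vector from undercomplete linear observations with iterative soft thresholding (ISTA). The dictionary, sparsity level and regularization weight are arbitrary choices for the example, not values from the thesis.

```python
# Minimal LASSO / basis pursuit sketch solved by iterative soft thresholding (ISTA).
import numpy as np

rng = np.random.default_rng(1)
m, n, k = 40, 100, 5                           # observations, dictionary size, sparsity

A = rng.standard_normal((m, n)) / np.sqrt(m)   # underdetermined linear operator
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true + 0.01 * rng.standard_normal(m)

lam = 0.05                                     # l1 regularization weight
step = 1.0 / np.linalg.norm(A, 2) ** 2         # step size <= 1 / ||A||^2 for convergence
x = np.zeros(n)
for _ in range(500):
    grad = A.T @ (A @ x - y)                   # gradient of the least-squares data fit
    z = x - step * grad
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft threshold

print("recovered support:", np.nonzero(np.abs(x) > 1e-3)[0])
print("true support     :", np.sort(np.nonzero(x_true)[0]))
```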

Panahi, Ashkan — Chalmers University of Technology


New Higher-Order Active Contour Models, Shape Priors, and Multiscale Analysis - Their Application To Road Network Extraction From Very High Resolution Satellite Images

The objective of this thesis is to develop and validate robust approaches for the semi-automatic extraction of road networks in dense urban areas from very high resolution (VHR) optical satellite images. Our models are based on the recently developed higher-order active contour (HOAC) phase field framework. The problem is difficult for two main reasons: VHR images are intrinsically complex, and network regions may have arbitrary topology. To tackle the complexity of the information contained in VHR images, we propose a multiresolution statistical data model and a multiresolution constrained prior model, which enable the integration of segmentation results obtained at coarse and fine resolutions. Subsequently, for the particular case of road map updating, we present a specific shape prior model derived from an outdated GIS digital map. This specific prior term balances the effect of the generic prior knowledge carried by ...

Peng, Ting — Project-Team Ariana (INRIA-Sophia Antipolis, France); LIAMA (CASIA, China)


Bayesian Compressed Sensing using Alpha-Stable Distributions

During the last decades, information has been gathered and processed at an explosive rate. This fact gives rise to a very important issue: how to effectively and precisely describe the information content of a given source signal, or an ensemble of source signals, such that it can be stored, processed or transmitted while taking into consideration the limitations and capabilities of the various digital devices involved. One of the fundamental principles of signal processing for decades has been the Nyquist-Shannon sampling theorem, which states that the minimum number of samples needed to reconstruct a signal without error is dictated by its bandwidth. However, there are many cases in everyday life in which sampling at the Nyquist rate results in too much data, demanding increased processing power as well as storage. A mathematical theory that emerged ...
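
The sampling-theorem statement above can be checked numerically: a band-limited signal sampled above its Nyquist rate is recovered (up to truncation of the infinite interpolation sum) by sinc interpolation. The signal, rates, and test point below are arbitrary choices for the illustration.

```python
# Worked illustration of the Nyquist-Shannon sampling theorem via sinc interpolation.
import numpy as np

f_max = 40.0                     # highest frequency in the test signal (Hz)
fs = 100.0                       # sampling rate > 2 * f_max, so the Nyquist condition holds
T = 1.0 / fs

def x(t):                        # band-limited test signal
    return np.sin(2 * np.pi * 15 * t) + 0.5 * np.cos(2 * np.pi * f_max * t)

n = np.arange(0, 200)            # sample indices (2 seconds of data)
samples = x(n * T)

def reconstruct(t):
    """Shannon reconstruction: x(t) ~= sum_n x[n] * sinc((t - n*T) / T)."""
    return np.sum(samples * np.sinc((t - n * T) / T))

t_test = 0.5037                  # an off-grid time instant, well inside the sampled interval
print("true value         :", x(t_test))
print("sinc reconstruction:", reconstruct(t_test))
```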

Tzagkarakis, George — University of Crete


Robust Methods for Sensing and Reconstructing Sparse Signals

Compressed sensing (CS) is a recently introduced signal acquisition framework that goes against the traditional Nyquist sampling paradigm. CS demonstrates that a sparse, or compressible, signal can be acquired using a low-rate acquisition process. Since noise is always present in practical data acquisition systems, sensing and reconstruction methods are typically developed assuming a Gaussian (light-tailed) model for the corrupting noise. However, when the underlying signal and/or the measurements are corrupted by impulsive noise, commonly employed linear sampling operators, coupled with Gaussian-derived reconstruction algorithms, fail to recover a close approximation of the signal. This dissertation develops robust sampling and reconstruction methods for sparse signals in the presence of impulsive noise. To achieve this objective, we make use of robust statistics theory to develop appropriate methods addressing the problem of impulsive noise in CS systems. We develop a generalized Cauchy distribution (GCD) ...
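
As a small illustration of why robust statistics help here (a generic robust-regression sketch, not the thesis' GCD-based method): ordinary least squares versus iteratively reweighted least squares with Cauchy/Lorentzian weights, when a few measurements are hit by impulsive outliers. All sizes and noise levels are made up for the example.

```python
# Least squares vs. a Cauchy-weighted IRLS fit under impulsive measurement noise.
import numpy as np

rng = np.random.default_rng(2)
m, n = 80, 10
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
y = A @ x_true + 0.05 * rng.standard_normal(m)
y[rng.choice(m, 6, replace=False)] += 30 * rng.standard_normal(6)   # impulsive outliers

# Plain least squares is badly affected by the outliers
x_ls = np.linalg.lstsq(A, y, rcond=None)[0]

# IRLS with Lorentzian (Cauchy) weights down-weights large residuals
x_r = x_ls.copy()
gamma = 1.0                                   # scale of the Cauchy weight function
for _ in range(30):
    r = y - A @ x_r
    w = 1.0 / (1.0 + (r / gamma) ** 2)        # weight for each residual
    Aw = A * w[:, None]                       # W A
    x_r = np.linalg.solve(A.T @ Aw, Aw.T @ y) # solve A^T W A x = A^T W y

print("LS error    :", np.linalg.norm(x_ls - x_true))
print("robust error:", np.linalg.norm(x_r - x_true))
```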

Carrillo, Rafael — University of Delaware


Bayesian methods for sparse and low-rank matrix problems

Many scientific and engineering problems require us to process measurements and data in order to extract information. Since we base decisions on information, it is important to design accurate and efficient processing algorithms. This is often done by modeling the signal of interest and the noise in the problem. One such modeling framework is Compressed Sensing, where the signal has a sparse or low-rank representation. In this thesis we study different approaches to designing algorithms for sparse and low-rank problems. Greedy methods are fast methods for sparse problems that iteratively detect and estimate the non-zero components. By modeling the detection problem as an array processing problem and a Bayesian filtering problem, we improve the detection accuracy. Bayesian methods approximate the sparsity by probability distributions which are iteratively modified. We show one approach to making the Bayesian method the Relevance Vector ...
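
For concreteness, a minimal orthogonal matching pursuit loop is sketched below as one example of the greedy "detect then estimate" iteration described above; it is a textbook variant, not the array-processing or Bayesian-filtering detection schemes of the thesis, and all sizes are illustrative.

```python
# Orthogonal matching pursuit: greedily detect support, then re-estimate by least squares.
import numpy as np

def omp(A, y, k):
    """Pick k columns of A one at a time and refit their coefficients at each step."""
    residual, support = y.copy(), []
    for _ in range(k):
        # Detection: column most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        # Estimation: least-squares fit on the selected support
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((50, 120)) / np.sqrt(50)
x_true = np.zeros(120)
x_true[[7, 31, 90]] = [1.0, -2.0, 0.5]
y = A @ x_true + 0.01 * rng.standard_normal(50)
print("recovered support:", np.sort(np.nonzero(omp(A, y, 3))[0]))
```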

Sundin, Martin — Department of Signal Processing, Royal Institute of Technology KTH


Parallel Magnetic Resonance Imaging reconstruction problems using wavelet representations

To reduce scanning time or improve spatio-temporal resolution in some MRI applications, parallel MRI acquisition techniques with multiple coils have emerged since the early 90’s as powerful methods. In these techniques, MRI images have to be reconstructed from acquired undersampled “k-space” data. To this end, several reconstruction techniques have been proposed, such as the widely used SENSitivity Encoding (SENSE) method. However, the reconstructed images generally present artifacts due to the noise corrupting the observed data and to coil sensitivity profile estimation errors. In this work, we present novel SENSE-based reconstruction methods which proceed with regularization in the complex wavelet domain so as to promote the sparsity of the solution. These methods achieve accurate image reconstruction under degraded experimental conditions, in which neither the SENSE method nor standard regularized methods (e.g. Tikhonov) give convincing results. The proposed approaches rely on ...
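
To make the SENSE inverse problem concrete, here is a toy single-voxel-group unfolding sketch with made-up coil sensitivities, comparing the plain least-squares SENSE solution with a Tikhonov-regularized variant (one of the baselines mentioned above); the wavelet-domain regularization developed in the thesis is not reproduced here.

```python
# Toy SENSE-style unfolding for one group of aliased voxels (illustrative values only).
import numpy as np

rng = np.random.default_rng(4)
n_coils, R = 8, 2                      # number of coils and undersampling (fold-over) factor

# Complex coil sensitivities for the R voxels that alias onto the same pixel
S = rng.standard_normal((n_coils, R)) + 1j * rng.standard_normal((n_coils, R))
x_true = np.array([1.0 + 0.5j, -0.3 + 0.2j])      # the R aliased voxel values
y = S @ x_true + 0.05 * (rng.standard_normal(n_coils) + 1j * rng.standard_normal(n_coils))

# Plain SENSE unfolding: least-squares solution of y = S x
x_sense = np.linalg.lstsq(S, y, rcond=None)[0]

# Tikhonov-regularized variant
lam = 0.1
x_tik = np.linalg.solve(S.conj().T @ S + lam * np.eye(R), S.conj().T @ y)

print("SENSE   :", x_sense)
print("Tikhonov:", x_tik)
```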

Chaari, Lotfi — Université Paris-Est


Learning Transferable Knowledge through Embedding Spaces

The unprecedented processing demand posed by the explosion of big data challenges researchers to design efficient and adaptive machine learning algorithms that do not require persistent retraining and avoid learning redundant information. Inspired by the learning techniques of intelligent biological agents, identifying transferable knowledge across learning problems has become a significant research focus for improving machine learning algorithms. In this thesis, we address the challenges of knowledge transfer through embedding spaces that capture and store hierarchical knowledge. In the first part of the thesis, we focus on the problem of cross-domain knowledge transfer. We first address zero-shot image classification, where the goal is to identify images from unseen classes using semantic descriptions of these classes. We train two coupled dictionaries which align the visual and semantic domains via an intermediate embedding space. We then extend this idea by training deep networks that ...
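
Below is a hedged sketch of zero-shot classification through a shared embedding space, using a simple ridge-regression map from synthetic visual features to class attribute vectors; it stands in for the general idea only and is not the coupled-dictionary method of the thesis. All dimensions, class counts, and data are invented.

```python
# Zero-shot classification sketch: map visual features into a semantic embedding space
# learned on seen classes, then label unseen-class samples by nearest attribute vector.
import numpy as np

rng = np.random.default_rng(5)
d_vis, d_sem, n_train = 64, 16, 500

# Semantic descriptions: 8 seen classes, 2 unseen classes
attr = rng.standard_normal((10, d_sem))
seen, unseen = np.arange(8), np.array([8, 9])

# Synthetic visual features: a hidden linear map of the class attributes plus noise
W_true = rng.standard_normal((d_sem, d_vis))
labels = rng.choice(seen, n_train)
X = attr[labels] @ W_true + 0.3 * rng.standard_normal((n_train, d_vis))

# Ridge regression from visual space to the semantic embedding space (seen classes only)
Y = attr[labels]
W = np.linalg.solve(X.T @ X + 1.0 * np.eye(d_vis), X.T @ Y)     # (d_vis, d_sem)

# Classify a sample from an unseen class by the closest attribute vector in the embedding
x_test = attr[9] @ W_true + 0.3 * rng.standard_normal(d_vis)
emb = x_test @ W
dists = np.linalg.norm(attr[unseen] - emb, axis=1)
print("predicted unseen class:", unseen[np.argmin(dists)])      # expected: 9
```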

Rostami, Mohammad — University of Pennsylvania


Blind Estimation and Equalization of MIMO Channels with and without Spatial Redundancy (original title in Spanish: Estimación e Igualación Ciega de Canales MIMO con y sin Redundancia Espacial)

Most communication systems require prior knowledge of the channel, which is usually estimated by means of a training sequence. However, the transmission of pilot symbols reduces bandwidth efficiency, which precludes the system from reaching the limits predicted by information theory. This problem has motivated the development of a large number of blind channel estimation and equalization techniques, which are able to obtain the channel or the source without the need to transmit a training signal. Usually, these techniques are based on prior knowledge of certain properties of the signal, such as the fact that it belongs to a finite alphabet, or its higher-order statistics. However, in the case of multiple-input multiple-output (MIMO) systems, it has been proven that the second-order statistics of the observations provide sufficient information for solving the blind problem. The aim ...
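
The second-order-statistics argument can be illustrated numerically: for observations y = Hs + n, the sample covariance of the observations reveals the column space of the MIMO channel H without any training symbols. The scenario below (array sizes, QPSK sources, noise level) is invented for the example and is not the blind method of the thesis.

```python
# Blind recovery of the MIMO channel's column space from second-order statistics.
import numpy as np

rng = np.random.default_rng(6)
n_rx, n_tx, N = 6, 2, 20000

H = rng.standard_normal((n_rx, n_tx)) + 1j * rng.standard_normal((n_rx, n_tx))
S = (rng.choice([-1, 1], (n_tx, N)) + 1j * rng.choice([-1, 1], (n_tx, N))) / np.sqrt(2)  # QPSK
noise = 0.1 * (rng.standard_normal((n_rx, N)) + 1j * rng.standard_normal((n_rx, N)))
Y = H @ S + noise

R_hat = Y @ Y.conj().T / N                     # sample covariance of the observations
eigval, eigvec = np.linalg.eigh(R_hat)
U_sig = eigvec[:, -n_tx:]                      # signal subspace: n_tx dominant eigenvectors

# Principal angles between the estimated signal subspace and the true column space of H
Q_H, _ = np.linalg.qr(H)
angles = np.arccos(np.clip(np.linalg.svd(U_sig.conj().T @ Q_H)[1], 0, 1))
print("principal angles (rad):", angles)       # close to zero -> subspace recovered blindly
```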

Rodriguez, Javier Via — Universidad de Cantabria


Gaussian Process Modelling for Audio Signals

Audio signals are characterised and perceived based on how their spectral make-up changes with time. Uncovering the behaviour of latent spectral components is at the heart of many real-world applications involving sound, but is a highly ill-posed task given the infinite number of ways any signal can be decomposed. This motivates the use of prior knowledge and a probabilistic modelling paradigm that can characterise uncertainty. This thesis studies the application of Gaussian processes to audio, which offer a principled non-parametric way to specify probability distributions over functions whilst also encoding prior knowledge. Along the way we consider what prior knowledge we have about sound, the way it behaves, and the way it is perceived, and write down these assumptions in the form of probabilistic models. We show how Bayesian time-frequency analysis can be reformulated as a spectral mixture Gaussian process, ...
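
As a small illustration of such a prior, the sketch below draws functions from a Gaussian process whose kernel is a single spectral-mixture component (a cosine modulated by a squared-exponential envelope), i.e. a quasi-periodic prior of the kind discussed for audio; all hyperparameters are arbitrary and this is not the model from the thesis.

```python
# Sampling quasi-periodic functions from a GP with one spectral-mixture kernel component.
import numpy as np

def spectral_mixture_kernel(t1, t2, freq=8.0, lengthscale=0.15, var=1.0):
    """k(t, t') = var * exp(-(t - t')^2 / (2 l^2)) * cos(2 pi f (t - t'))."""
    tau = t1[:, None] - t2[None, :]
    return var * np.exp(-0.5 * (tau / lengthscale) ** 2) * np.cos(2 * np.pi * freq * tau)

rng = np.random.default_rng(7)
t = np.linspace(0, 1, 400)
K = spectral_mixture_kernel(t, t) + 1e-6 * np.eye(len(t))   # jitter for numerical stability

# Each sample path is a random quasi-periodic "audio-like" function
samples = rng.multivariate_normal(np.zeros(len(t)), K, size=3)
print(samples.shape)        # (3, 400)
```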

Wilkinson, William — Queen Mary University of London


Advances in graph signal processing: Graph filtering and network identification

To the surprise of most of us, complexity in nature emerges from simplicity. No matter how simple a basic unit is, when many of them work together, the interactions among these units lead to complexity. This complexity is present in the spreading of diseases, where slightly different policies, or conditions, might lead to very different results; or in biological systems, where the interactions between elements maintain the delicate balance that keeps life running. Fortunately, despite their complexity, current advances in technology have allowed us to have more than just a sneak peek at these systems. With new views on how to observe such systems and gather data, we aim to understand the complexity within. One of these new views comes from the field of graph signal processing, which provides models and tools to understand and process data coming from such complex systems. With ...
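
A minimal graph-filtering example of the kind of tool referred to above: a low-pass filter implemented as a polynomial of the graph Laplacian, applied to a noisy smooth signal on a ring graph. The graph, signal, and filter coefficients are illustrative choices, not taken from the thesis.

```python
# Polynomial graph filter h(L) = h0*I + h1*L + h2*L^2 acting on a signal over a ring graph.
import numpy as np

rng = np.random.default_rng(8)
n = 30

# Ring graph: adjacency, degree, and combinatorial Laplacian L = D - A
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
L = np.diag(A.sum(axis=1)) - A

# Smooth graph signal (spanned by the lowest graph frequencies) plus noise
eigval, eigvec = np.linalg.eigh(L)
x_smooth = eigvec[:, :4] @ rng.standard_normal(4)
y = x_smooth + 0.3 * rng.standard_normal(n)

# Second-order polynomial filter: roughly low-pass over the Laplacian spectrum [0, 4]
h = [1.0, -0.45, 0.05]
x_filt = h[0] * y + h[1] * (L @ y) + h[2] * (L @ (L @ y))

print("noisy error   :", np.linalg.norm(y - x_smooth))
print("filtered error:", np.linalg.norm(x_filt - x_smooth))
```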

Coutino, Mario — Delft University of Technology


Analysis and improvement of quantification algorithms for magnetic resonance spectroscopy

Magnetic Resonance Spectroscopy (MRS) is a technique used in fundamental research and in clinical environments. During recent years, the clinical application of MRS has gained importance, especially as a non-invasive tool for diagnosis and therapy monitoring of brain and prostate tumours. The most important asset of MRS is its ability to determine the concentration of chemical substances non-invasively. To extract relevant signal parameters, MRS data have to be quantified. This is usually not straightforward, since in vivo MRS signals are characterized by poor signal-to-noise ratios, overlapping peaks, acquisition-related artefacts and the presence of disturbing components (e.g. residual water in proton spectra). The work presented in this thesis aims to improve quantification in different applications of in vivo MRS. To obtain the signal parameters related to MRS data, different approaches have been suggested in the past. Black-box methods don't require ...
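
A hedged sketch of the quantification step: if the time-domain signal is modeled as a sum of exponentially damped complex sinusoids and the frequencies and dampings are assumed known (one form of prior knowledge), the amplitudes follow from linear least squares. The values below are synthetic and not tied to any metabolite or to the methods of the thesis.

```python
# Amplitude estimation for a sum of exponentially damped complex sinusoids (synthetic data).
import numpy as np

rng = np.random.default_rng(9)
n, dt = 512, 1e-3                                     # samples and sampling interval (s)
t = np.arange(n) * dt

freqs = np.array([60.0, 140.0, 210.0])                # Hz, assumed known
damps = np.array([30.0, 20.0, 40.0])                  # 1/s,  assumed known
amps_true = np.array([1.0 + 0.2j, 0.5 - 0.1j, 0.8 + 0.0j])

# Basis of damped complex exponentials; one column per spectral component
B = np.exp((-damps + 2j * np.pi * freqs)[None, :] * t[:, None])
y = B @ amps_true + 0.05 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

amps_hat = np.linalg.lstsq(B, y, rcond=None)[0]       # amplitudes ~ relative concentrations
print("estimated amplitudes:", np.round(amps_hat, 3))
```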

Pels, Pieter — Katholieke Universiteit Leuven


Explicit and implicit tensor decomposition-based algorithms and applications

Various real-life data such as time series and multi-sensor recordings can be represented by vectors and matrices, which are one-way and two-way arrays of numerical values, respectively. Valuable information can be extracted from these measured data matrices by means of matrix factorizations in a broad range of applications within signal processing, data mining, and machine learning. While matrix-based methods are powerful and well-known tools for various applications, they are limited to single-mode variations, making them ill-suited to tackle multi-way data without loss of information. Higher-order tensors are a natural extension of vectors (first order) and matrices (second order), enabling us to represent multi-way arrays of numerical values, which have become ubiquitous in signal processing and data mining applications. By leveraging the powerful utilities offered by tensor decompositions, such as compression and uniqueness properties, we can extract more information from multi-way ...
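
To ground the discussion, here is a compact canonical polyadic (CP) decomposition by alternating least squares on a small synthetic tensor; this is a generic textbook ALS sketch, not the explicit or implicit algorithms developed in the thesis, and all sizes are arbitrary.

```python
# CP decomposition of a third-order tensor by alternating least squares (ALS).
import numpy as np

rng = np.random.default_rng(10)
I, J, K, R = 8, 9, 10, 3

# Build an exactly rank-3 tensor from random factor matrices
A = rng.standard_normal((I, R))
B = rng.standard_normal((J, R))
C = rng.standard_normal((K, R))
T = np.einsum('ir,jr,kr->ijk', A, B, C)

def khatri_rao(U, V):
    """Column-wise Kronecker product: rows indexed by (u, v) with the V index fastest."""
    return np.einsum('ir,jr->ijr', U, V).reshape(-1, U.shape[1])

# Random initialization, then cyclic least-squares updates of the three factors
Ah, Bh, Ch = (rng.standard_normal((d, R)) for d in (I, J, K))
for _ in range(100):
    Ah = np.linalg.lstsq(khatri_rao(Bh, Ch), T.reshape(I, -1).T, rcond=None)[0].T
    Bh = np.linalg.lstsq(khatri_rao(Ah, Ch), T.transpose(1, 0, 2).reshape(J, -1).T, rcond=None)[0].T
    Ch = np.linalg.lstsq(khatri_rao(Ah, Bh), T.transpose(2, 0, 1).reshape(K, -1).T, rcond=None)[0].T

T_hat = np.einsum('ir,jr,kr->ijk', Ah, Bh, Ch)
print("relative fit error:", np.linalg.norm(T_hat - T) / np.linalg.norm(T))
```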

Boussé, Martijn — KU Leuven


Inferring Room Geometries

Determining the geometry of an acoustic enclosure using microphone arrays has become an active area of research. Knowledge gained about the acoustic environment, such as the location of reflectors, can be advantageous for applications such as sound source localization, dereverberation and adaptive echo cancellation, by assisting in tracking environment changes and helping the initialization of such algorithms. A methodology to blindly infer the geometry of an acoustic enclosure by estimating the location of reflective surfaces, based on acoustic measurements using an arbitrary array geometry, is developed and analyzed. The starting point of this work considers a geometric constraint, valid in both two and three dimensions, that converts time-of-arrival and time-difference-of-arrival information into elliptical constraints on the location of reflectors. Multiple constraints are combined to yield the line or plane parameters of the reflectors by minimizing a specific cost function in the ...
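
The elliptical constraint can be illustrated in two dimensions: the time of arrival of a first-order reflection defines an ellipse with foci at the source and the microphone, and the reflecting wall is tangent to that ellipse. The geometry below (source, microphone, wall position) is invented for the example and is not taken from the thesis.

```python
# 2-D illustration: a reflection TOA defines an ellipse to which the reflecting wall is tangent.
import numpy as np

c = 343.0                                  # speed of sound (m/s)
src = np.array([1.0, 1.5])
mic = np.array([3.5, 1.0])
wall_y = 0.0                               # reflector: the horizontal line y = 0

# Image-source model: reflect the source across the wall to get the reflection path length
img = np.array([src[0], 2 * wall_y - src[1]])
toa = np.linalg.norm(img - mic) / c        # time of arrival of the wall reflection

# Elliptical constraint: any candidate reflection point p on the wall satisfies
# ||src - p|| + ||p - mic|| >= c * toa, with equality at the true reflection point.
xs = np.linspace(0, 5, 2001)
wall_pts = np.c_[xs, np.full_like(xs, wall_y)]
path = (np.linalg.norm(src - wall_pts, axis=1) + np.linalg.norm(mic - wall_pts, axis=1))

print("c * TOA           :", c * toa)
print("shortest wall path:", path.min())       # equal -> the wall is tangent to the ellipse
print("reflection point x:", xs[path.argmin()])
```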

Filos, Jason — Imperial College London
