Generalized Consistent Estimation in Arbitrarily High Dimensional Signal Processing

The theory of statistical signal processing finds a wide variety of applications in data communications, such as channel estimation, equalization, and symbol detection, and in sensor array processing, such as beamforming and radar systems. Indeed, a large number of these applications can be interpreted as parametric estimation problems, typically approached by a linear filtering operation acting upon a set of multidimensional observations. Moreover, in many cases, the underlying structure of the observable signals is linear in the parameter to be inferred. This dissertation is devoted to the design and evaluation of statistical signal processing methods under realistic implementation conditions encountered in practice. Traditional statistical signal processing techniques guarantee good performance only when a particularly large number of observations of fixed dimension is available. Indeed, the original optimality conditions cannot be theoretically guaranteed ...

Rubio, Francisco — Universitat Politecnica de Catalunya


Development of Fast Machine Learning Algorithms for False Discovery Rate Control in Large-Scale High-Dimensional Data

This dissertation develops false discovery rate (FDR) controlling machine learning algorithms for large-scale high-dimensional data. Ensuring the reproducibility of discoveries based on high-dimensional data is pivotal in numerous applications. The developed algorithms perform fast variable selection in large-scale high-dimensional settings where the number of variables may be much larger than the number of samples. This includes large-scale data with up to millions of variables, such as genome-wide association studies (GWAS). Theoretical finite-sample FDR-control guarantees based on martingale theory have been established, proving the trustworthiness of the developed methods. The practical open-source R software packages TRexSelector and tlars, which implement the proposed algorithms, have been published on the Comprehensive R Archive Network (CRAN). Extensive numerical experiments and real-world problems in biomedical and financial engineering demonstrate their performance in challenging use cases. The first three main parts of this dissertation present ...

Machkour, Jasin — Technische Universität Darmstadt
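The thesis's T-Rex selector involves dummy variables and early-terminated solution paths and is not reproduced here; as a self-contained illustration of the FDR-control concept the abstract refers to, the classical Benjamini-Hochberg procedure (a different, generic method; all numbers below are illustrative) can be sketched as:

```python
import numpy as np

def benjamini_hochberg(p_values, alpha=0.1):
    """Return indices of rejected hypotheses with FDR controlled at level alpha."""
    p = np.asarray(p_values)
    m = len(p)
    order = np.argsort(p)
    sorted_p = p[order]
    # Find the largest k such that p_(k) <= (k/m) * alpha
    thresh = alpha * np.arange(1, m + 1) / m
    below = np.nonzero(sorted_p <= thresh)[0]
    if below.size == 0:
        return np.array([], dtype=int)
    k = below.max()
    # Reject the k smallest p-values (in original index order)
    return np.sort(order[: k + 1])

p = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.5, 0.99]
rejected = benjamini_hochberg(p, alpha=0.1)
```

At level 0.1 this rejects the six smallest p-values; the thesis's algorithms provide analogous finite-sample guarantees for variable selection rather than for p-value thresholding.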


Group-Sparse Regression - With Applications in Spectral Analysis and Audio Signal Processing

This doctoral thesis focuses on sparse regression, a statistical modeling tool for selecting valuable predictors in underdetermined linear models. By imposing different constraints on the structure of the variable vector in the regression problem, one obtains estimates with sparse supports, i.e., where only a few of the elements in the variable vector have non-zero values. The thesis collects six papers which, to varying extents, deal with the applications, implementations, modifications, translations, and other analyses of such problems. Sparse regression is often used to approximate additive models with intricate, non-linear, non-smooth, or otherwise problematic functions, by creating an underdetermined model consisting of candidate values for these functions and linear response variables which select among the candidates. Sparse regression is therefore a widely used tool in applications such as image processing, audio processing, and seismological and biomedical modeling, but is ...

Kronvall, Ted — Lund University
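A minimal numerical sketch of the sparse regression problem described above: solving the lasso by iterative soft-thresholding (ISTA) on synthetic data where only a few coefficients are non-zero (all dimensions and parameter values are illustrative, not from the thesis):

```python
import numpy as np

def ista_lasso(X, y, lam, n_iter=2000):
    """ISTA for min_b 0.5*||y - X b||^2 + lam*||b||_1."""
    L = np.linalg.norm(X, 2) ** 2              # Lipschitz constant of the gradient
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        g = X.T @ (X @ b - y)                  # gradient of the smooth part
        z = b - g / L
        b = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return b

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 100))             # underdetermined: 50 samples, 100 variables
b_true = np.zeros(100)
b_true[[3, 17, 42]] = [2.0, -1.5, 1.0]         # 3-sparse ground truth
y = X @ b_true
b_hat = ista_lasso(X, y, lam=0.5)
support = np.nonzero(np.abs(b_hat) > 0.1)[0]   # recovered support
```

On this noiseless example the estimated support matches the three true predictors, illustrating how the l1 constraint selects among candidate variables.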


Contributions to signal analysis and processing using compressed sensing techniques

Chapter 2 contains a short introduction to the fundamentals of compressed sensing theory, which is the larger context of this thesis. We start by introducing the key concepts of sparsity and sparse representations of signals. We discuss the central problem of compressed sensing, i.e., how to accurately recover sparse signals from a small number of measurements, as well as the multiple formulations of the reconstruction problem. A large part of the chapter is devoted to some of the most important conditions that are necessary and/or sufficient to guarantee accurate recovery. The aim is to introduce the reader to the basic results without the burden of detailed proofs. In addition, we also present a few of the popular reconstruction and optimization algorithms that we use throughout the thesis. Chapter 3 presents an alternative sparsity model known as analysis sparsity, which offers similar recovery ...

Cleju, Nicolae — "Gheorghe Asachi" Technical University of Iasi


Robust Estimation and Model Order Selection for Signal Processing

In this thesis, advanced robust estimation methodologies for signal processing are developed and analyzed. The developed methodologies solve problems concerning multi-sensor data, robust model selection as well as robustness for dependent data. The work has been applied to solve practical signal processing problems in different areas of biomedical and array signal processing. In particular, for univariate independent data, a robust criterion is presented to select the model order with an application to corneal-height data modeling. The proposed criterion overcomes some limitations of existing robust criteria. For real-world data, it selects the radial model order of the Zernike polynomial of the corneal topography map in accordance with clinical expectations, even if the measurement conditions for the videokeratoscopy, which is the state-of-the-art method to collect corneal-height data, are poor. For multi-sensor data, robust model order selection criteria are proposed and applied ...

Muma, Michael — Technische Universität Darmstadt


Measurement Methods for Estimating the Error Vector Magnitude in OFDM Transceivers

The error vector magnitude (EVM) is a standard metric to quantify the performance of digital communication systems and related building blocks. Regular EVM measurements require expensive equipment featuring inphase and quadrature (IQ) demodulation, wideband analog-to-digital converters (ADCs), and dedicated receiver algorithms to demodulate the data symbols. With modern high-data-rate communication standards that require high bandwidths and low error levels, it is difficult to avoid bias due to errors in the measurement chain. This thesis develops and discusses measurement methods that address the above-described issues with EVM measurements. The first method is an extension of the regular EVM, yielding two results from a single measurement. One result equals the regular EVM result, whereas the other excludes potential errors due to mismatches of the I- and Q-paths of direct conversion transmitters and receivers (IQ imbalance). This can be ...

Freiberger, Karl — Graz University of Technology
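The regular EVM the thesis extends is, at its core, the RMS error between measured and reference symbols normalized by the RMS reference power; a minimal sketch on synthetic QPSK symbols (the noise level and symbol count are illustrative assumptions):

```python
import numpy as np

def evm_percent(measured, reference):
    """RMS error vector magnitude, normalized by RMS reference power, in percent."""
    err = measured - reference
    return 100.0 * np.sqrt(np.mean(np.abs(err) ** 2) / np.mean(np.abs(reference) ** 2))

rng = np.random.default_rng(0)
# Unit-power QPSK reference constellation points
ref = (rng.choice([-1, 1], 200) + 1j * rng.choice([-1, 1], 200)) / np.sqrt(2)
# Measured symbols: reference plus complex white noise with RMS 0.05
noise = 0.05 * (rng.standard_normal(200) + 1j * rng.standard_normal(200)) / np.sqrt(2)
evm = evm_percent(ref + noise, ref)            # close to 5 percent
```

Impairments such as IQ imbalance add systematically to this error vector, which is why the thesis's extended method separates them from the remaining error.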


Least squares support vector machines classification applied to brain tumour recognition using magnetic resonance spectroscopy

Magnetic Resonance Spectroscopy (MRS) is a technique which has evolved rapidly over the past 15 years. It has been used specifically in the context of brain tumours and has shown very encouraging correlations between brain tumour type and spectral pattern. In vivo MRS enables the quantification of metabolite concentrations non-invasively, thereby avoiding serious risks of brain damage. While Magnetic Resonance Imaging (MRI) is commonly used for identifying the location and size of brain tumours, MRS complements it with the potential to provide detailed chemical information about metabolites present in the brain tissue and to enable early detection of abnormalities. However, the introduction of MRS in clinical medicine has been difficult due to problems associated with the acquisition of in vivo MRS signals from living tissues at the low magnetic fields acceptable for patients. The low signal-to-noise ratio makes accurate analysis of ...

Lukas, Lukas — Katholieke Universiteit Leuven
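The least squares SVM in the title replaces the standard SVM's inequality constraints with equalities, so training reduces to solving a single linear system; a minimal sketch of the regression form applied to ±1 class labels with an RBF kernel (hyperparameters and data are illustrative assumptions, not the thesis's tumour-classification setup):

```python
import numpy as np

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    """Train an LS-SVM (regression form, ±1 labels) by solving one linear system."""
    n = len(y)
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-d2 / (2 * sigma ** 2))         # RBF kernel matrix
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0                             # bias constraint row
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma          # ridge term from the LS loss
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[1:], sol[0]                     # dual weights alpha, bias b

def lssvm_predict(X_train, alpha, b, X_new, sigma=1.0):
    d2 = np.sum((X_new[:, None, :] - X_train[None, :, :]) ** 2, axis=-1)
    return np.sign(np.exp(-d2 / (2 * sigma ** 2)) @ alpha + b)

rng = np.random.default_rng(0)
X1 = rng.normal([-2.0, -2.0], 0.4, (20, 2))    # class -1 cluster
X2 = rng.normal([2.0, 2.0], 0.4, (20, 2))      # class +1 cluster
Xtr = np.vstack([X1, X2])
ytr = np.concatenate([-np.ones(20), np.ones(20)])
alpha, b = lssvm_train(Xtr, ytr)
pred = lssvm_predict(Xtr, alpha, b, Xtr)
```

Unlike the standard SVM's quadratic program, every training point here contributes a dual weight, which is what makes LS-SVM training a cheap dense linear solve.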


Spectral Variability in Hyperspectral Unmixing: Multiscale, Tensor, and Neural Network-based Approaches

The spectral signatures of the materials contained in hyperspectral images, also called endmembers (EMs), can be significantly affected by variations in atmospheric, illumination or environmental conditions typically occurring within an image. Traditional spectral unmixing (SU) algorithms neglect the spectral variability of the endmembers, which propagates significant mismodeling errors throughout the whole unmixing process and compromises the quality of the estimated abundances. Therefore, significant efforts have recently been dedicated to mitigating the effects of spectral variability in SU. However, many challenges still remain in how to best exploit a priori information about the problem in order to improve the quality, robustness, and efficiency of SU algorithms that account for spectral variability. In this thesis, new strategies are developed to address spectral variability in SU. First, an (over)-segmentation-based multiscale regularization strategy is proposed to explore spatial information about the abundance ...

Borsoi, Ricardo Augusto — Université Côte d'Azur; Federal University of Santa Catarina


Bayesian Fusion of Multi-band Images: A Powerful Tool for Super-resolution

Hyperspectral (HS) imaging, which consists of acquiring the same scene in several hundreds of contiguous spectral bands (a three dimensional data cube), has opened a new range of relevant applications, such as target detection [MS02], classification [C.-03] and spectral unmixing [BDPD+12]. However, while HS sensors provide abundant spectral information, their spatial resolution is generally more limited. Thus, fusing the HS image with other highly resolved images of the same scene, such as multispectral (MS) or panchromatic (PAN) images, is an interesting problem. The problem of fusing a high spectral and low spatial resolution image with an auxiliary image of higher spatial but lower spectral resolution, also known as multi-resolution image fusion, has been explored for many years [AMV+11]. From an application point of view, this problem is also important as motivated by recent national programs, e.g., the Japanese next-generation space-borne ...

Wei, Qi — University of Toulouse


Adaptive filtering algorithms for acoustic echo cancellation and acoustic feedback control in speech communication applications

Multimedia consumer electronics are nowadays everywhere, from teleconferencing, hands-free communications and in-car communications to smart TV applications and more. We are living in a world of telecommunication where ideal scenarios for implementing these applications are hard to find. Instead, practical implementations typically bring many problems associated with each real-life scenario. This thesis mainly focuses on two of these problems, namely, acoustic echo and acoustic feedback. On the one hand, acoustic echo cancellation (AEC) is widely used in mobile and hands-free telephony, where the existence of echoes degrades intelligibility and listening comfort. On the other hand, acoustic feedback limits the maximum amplification that can be applied in, e.g., in-car communications or in conferencing systems, before howling due to instability appears. Even though AEC and acoustic feedback cancellation (AFC) are functional in many applications, there are still open issues. This means that ...

Gil-Cacho, Jose Manuel — KU Leuven
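A standard building block behind AEC is a normalized LMS (NLMS) adaptive filter that identifies the echo path from the far-end signal to the microphone; a minimal sketch on a synthetic, noiseless echo path (filter length, step size, and echo path are illustrative assumptions):

```python
import numpy as np

def nlms_echo_canceller(far_end, mic, n_taps=8, mu=0.5, eps=1e-6):
    """NLMS adaptive filter: estimate the echo path and output the echo-free error."""
    w = np.zeros(n_taps)                  # adaptive filter taps
    err = np.zeros(len(mic))
    x_buf = np.zeros(n_taps)              # newest far-end sample first
    for n in range(len(mic)):
        x_buf = np.concatenate(([far_end[n]], x_buf[:-1]))
        y_hat = w @ x_buf                 # estimated echo
        err[n] = mic[n] - y_hat           # echo-cancelled output
        # Normalized step: divide by the instantaneous input power
        w += mu * err[n] * x_buf / (x_buf @ x_buf + eps)
    return w, err

rng = np.random.default_rng(0)
x = rng.standard_normal(5000)             # far-end (loudspeaker) signal
h = np.array([0.8, 0.0, -0.4, 0.2])       # unknown echo path impulse response
d = np.convolve(x, h)[: len(x)]           # microphone picks up the echo
w, e = nlms_echo_canceller(x, d)
```

After convergence the filter taps match the echo path and the residual echo is essentially zero; the open issues the thesis addresses arise when the near-end signal, nonlinearities, or a time-varying path break these idealized conditions.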


Non-Coherent Communication in Multiple-Antenna Systems: Receiver Design, Codebook Construction and Capacity Analysis

The thesis addresses the problem of space-time codebook design for communication in multiple-input multiple-output (MIMO) wireless systems. The realistic and challenging non-coherent setup (channel state information is absent at the receiver) is considered. A generalized likelihood ratio test (GLRT)-like detector is assumed at the receiver and contrary to most existing approaches, an arbitrary correlation structure is allowed for the additive Gaussian observation noise. A theoretical analysis of the probability of error is derived, for both the high and low signal-to-noise ratio (SNR) regimes. This leads to a codebook design criterion which shows that optimal codebooks correspond to optimal packings in a Cartesian product of projective spaces. The actual construction of the codebooks involves solving a high-dimensional, nonlinear, nonsmooth optimization problem which is tackled here in two phases: a convex semi-definite programming (SDP) relaxation furnishes an initial point which is then ...

Beko, Marko — IST, Lisbon


Sketching for Large-Scale Learning of Mixture Models

Learning parameters from voluminous data can be prohibitive in terms of memory and computational requirements. Furthermore, new challenges arise from modern database architectures, such as the requirement for learning methods to be amenable to streaming, parallel and distributed computing. In this context, an increasingly popular approach is to first compress the database into a representation called a linear sketch that satisfies all the mentioned requirements, and then to learn the desired information using only this sketch, which can be significantly faster than using the full data if the sketch is small. In this thesis, we introduce a generic methodology to fit a mixture of probability distributions on the data, using only a sketch of the database. The sketch is defined by combining two notions from the reproducing kernel literature, namely kernel mean embedding and Random Features expansions. It is seen to correspond ...

Keriven, Nicolas — IRISA, Rennes, France
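The sketch described above is the empirical mean of random Fourier features, i.e., an empirical characteristic function sampled at random frequencies; a minimal numerical illustration with assumed dimensions (the sketch size, frequency distribution, and mixture are not the thesis's settings):

```python
import numpy as np

def sketch(X, W):
    """Linear sketch of dataset X: empirical mean of random Fourier features e^{i w^T x}."""
    return np.exp(1j * X @ W.T).mean(axis=0)

rng = np.random.default_rng(0)
m = 64                                     # sketch size (complex entries)
W = rng.standard_normal((m, 2))            # frequencies ~ N(0, I): Gaussian kernel embedding

# Two independent datasets drawn from the same two-component mixture
X1 = np.concatenate([rng.normal(-2, 0.5, (4000, 2)), rng.normal(2, 0.5, (4000, 2))])
X2 = np.concatenate([rng.normal(-2, 0.5, (4000, 2)), rng.normal(2, 0.5, (4000, 2))])
z1, z2 = sketch(X1, W), sketch(X2, W)      # 64 complex numbers summarize 8000 points each
```

Because the sketch is a mean, it can be updated one sample (or one machine's partial sum) at a time, which is what makes it amenable to streaming and distributed computing; datasets from the same distribution yield nearly identical sketches.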


Statistical methods using hydrodynamic simulations of stellar atmospheres for detecting exoplanets in radial velocity data

When the noise affecting time series is colored with unknown statistics, a difficulty for periodic signal detection is to control the true significance level at which the detection tests are conducted. This thesis investigates the possibility of using training datasets of the noise to improve this control. Specifically, for the case of regularly sampled observations, we analyze the performance of various detectors applied to periodograms standardized using the noise training datasets. Emphasis is put on sparse detection in the Fourier domain and on the limitation posed by the necessarily finite size of the training sets available in practice. We study the resulting false alarm and detection rates and show that the proposed standardization leads, in some cases, to powerful constant false alarm rate tests. Although analytical results are derived in an asymptotic regime, numerical results show that the theory accurately ...

Sulis, Sophia — Université Côte d’Azur
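The standardization idea, dividing a test periodogram by the average periodogram of noise training series so that the unknown noise spectrum is flattened, can be illustrated on synthetic colored noise (the AR(1) noise model, series length, training-set size, and injected sinusoid are assumptions for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)
N, L = 256, 100                            # series length, number of noise training series

def periodogram(x):
    return np.abs(np.fft.rfft(x)) ** 2 / len(x)

def colored(size):
    """Colored noise: AR(1) filtering of white noise (illustrative noise model)."""
    w = rng.standard_normal(size)
    x = np.zeros(size)
    for n in range(1, size):
        x[n] = 0.8 * x[n - 1] + w[n]
    return x

# Average periodogram of the training set estimates the unknown noise spectrum
p_train = np.mean([periodogram(colored(N)) for _ in range(L)], axis=0)

# Test series: same colored noise plus a sinusoid at Fourier bin 40
t = np.arange(N)
x = colored(N) + 0.8 * np.sin(2 * np.pi * 40 * t / N)
p_std = periodogram(x) / p_train           # standardized periodogram
detected = int(np.argmax(p_std[1:-1])) + 1 # strongest non-DC, non-Nyquist bin
```

After standardization the noise bins are approximately identically distributed across frequency, so a single threshold yields a (nearly) constant false alarm rate; the thesis quantifies how the finite training-set size L degrades this property.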


Bayesian resolution of the non linear inverse problem of Electrical Impedance Tomography with Finite Element modeling

Resistivity distribution estimation, widely known as Electrical Impedance Tomography (EIT), is a nonlinear ill-posed inverse problem. The partial differential equation governing this experiment has no analytical solution for an arbitrary conductivity distribution, so solving the forward problem requires an approximation. The Finite Element Method (FEM) provides us with a computationally cheap forward model which preserves the nonlinear image-data relation and also proves sufficiently accurate for the inversion. Within the Bayesian approach, Markovian priors on the log-conductivity distribution are introduced for regularization. The neighborhood system is directly derived from the FEM triangular mesh structure. We first propose a maximum a posteriori (MAP) estimation with a Huber-Markov prior, which favours smooth distributions while preserving locally discontinuous features. The resulting criterion is minimized with the pseudo-conjugate gradient method. Simulation results reveal significant improvements in terms of robustness to noise, computational speed ...

Martin, Thierry — Laboratoire des signaux et systèmes


Modeling of Magnetic Fields and Extended Objects for Localization Applications

The level of automation in our society is ever increasing. Technologies like self-driving cars, virtual reality, and fully autonomous robots, which all were unimaginable a few decades ago, are realizable today, and will become standard consumer products in the future. These technologies depend upon autonomous localization and situation awareness where careful processing of sensory data is required. To increase efficiency, robustness and reliability, appropriate models for these data are needed. In this thesis, such models are analyzed within three different application areas, namely (1) magnetic localization, (2) extended target tracking, and (3) autonomous learning from raw pixel information. Magnetic localization is based on one or more magnetometers measuring the induced magnetic field from magnetic objects. In this thesis we present a model for determining the position and the orientation of small magnets with an accuracy of a few millimeters. This ...

Wahlström, Niklas — Linköping University
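The magnetic localization described above typically rests on the point-dipole field model, which maps a magnet's position and moment to the field seen by a magnetometer; a minimal sketch of that model (the moment and sensor positions below are illustrative assumptions):

```python
import numpy as np

def dipole_field(r, m):
    """Magnetic flux density (tesla) of a point dipole m (A·m^2) at offset r (m)."""
    mu0 = 4e-7 * np.pi                     # vacuum permeability
    rn = np.linalg.norm(r)
    r_hat = r / rn
    return mu0 / (4 * np.pi * rn ** 3) * (3 * np.dot(m, r_hat) * r_hat - m)

m = np.array([0.0, 0.0, 1.0])              # dipole moment along z
B_axis = dipole_field(np.array([0.0, 0.0, 0.1]), m)  # on-axis, 10 cm away
B_side = dipole_field(np.array([0.1, 0.0, 0.0]), m)  # equatorial, 10 cm away
```

The on-axis field is twice as strong as the equatorial field and oppositely aligned, and both decay as 1/r^3; inverting this map from several magnetometer readings is what yields millimeter-level position and orientation estimates.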
