## Robust Algorithms for Linear and Nonlinear Regression via Sparse Modeling Methods: Theory, Algorithms and Applications to Image Denoising (2016)

Sparsity Models for Signals: Theory and Applications

Many signal and image processing applications have benefited remarkably from the theory of sparse representations. In its classical form this theory models a signal as having a sparse representation under a given dictionary -- this is referred to as the "Synthesis Model". In this work we focus on greedy methods for the problem of recovering a signal from a set of degraded linear measurements. We consider four different sparsity frameworks that extend the aforementioned synthesis model: (i) the cosparse analysis model; (ii) the signal space paradigm; (iii) the transform domain strategy; and (iv) the sparse Poisson noise model. Our algorithms of interest in the first part of the work are the greedy-like schemes: CoSaMP, subspace pursuit (SP), iterative hard thresholding (IHT) and hard thresholding pursuit (HTP). It has been shown for the synthesis model that these can achieve a stable recovery ...
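As a hedged illustration of the greedy-like template these schemes share (a textbook sketch, not the thesis's exact algorithms), iterative hard thresholding alternates a gradient step toward the measurements with hard thresholding to the s largest entries; the step size and iteration count below are illustrative assumptions:

```python
import numpy as np

def iht(A, y, s, n_iter=100, step=1.0):
    """Iterative hard thresholding: recover an s-sparse x from y ~ A x.

    Minimal textbook sketch; `step` and `n_iter` are illustrative
    choices, not tuned values from the thesis.
    """
    n = A.shape[1]
    x = np.zeros(n)
    for _ in range(n_iter):
        g = x + step * A.T @ (y - A @ x)    # gradient step toward the data
        keep = np.argsort(np.abs(g))[-s:]   # indices of the s largest entries
        x = np.zeros(n)
        x[keep] = g[keep]                   # hard-threshold: keep only those
    return x
```

For a well-conditioned sensing matrix this projected-gradient loop converges to the sparse signal; the other greedy-like schemes differ mainly in how the support estimate and least-squares refit are interleaved.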

Giryes, Raja — Technion

Group-Sparse Regression - With Applications in Spectral Analysis and Audio Signal Processing

This doctoral thesis focuses on sparse regression, a statistical modeling tool for selecting valuable predictors in underdetermined linear models. By imposing different constraints on the structure of the variable vector in the regression problem, one obtains estimates which have sparse supports, i.e., where only a few of the elements in the response variable have non-zero values. The thesis collects six papers which, to a varying extent, deal with the applications, implementations, modifications, translations, and other analyses of such problems. Sparse regression is often used to approximate additive models with intricate, non-linear, non-smooth or otherwise problematic functions, by creating an underdetermined model consisting of candidate values for these functions, and linear response variables which select among the candidates. Sparse regression is therefore a widely used tool in applications such as image processing, audio processing, and seismological and biomedical modeling, but is ...
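A common instance of such a problem is the group lasso, which promotes group-wise sparse supports by penalizing the Euclidean norms of coefficient groups. Below is a minimal proximal-gradient sketch of that generic formulation (not the estimators developed in the thesis; function names and parameter values are illustrative assumptions):

```python
import numpy as np

def group_soft_threshold(v, t):
    """Block soft-thresholding: proximal operator of t * ||v||_2."""
    norm = np.linalg.norm(v)
    if norm <= t:
        return np.zeros_like(v)
    return (1 - t / norm) * v

def group_lasso(A, y, groups, lam, n_iter=500):
    """Proximal gradient for min_x 0.5||y - Ax||^2 + lam * sum_g ||x_g||_2.

    `groups` is a list of index arrays partitioning the columns of A.
    A generic sketch of the group-lasso idea, not the thesis's methods.
    """
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L for the smooth part
    for _ in range(n_iter):
        g = x - step * A.T @ (A @ x - y)     # gradient step on the data fit
        for idx in groups:
            x[idx] = group_soft_threshold(g[idx], step * lam)  # per-group prox
    return x
```

Groups whose combined energy falls below the threshold are zeroed out entirely, which is exactly the "select among the candidates" behaviour described above.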

Kronvall, Ted — Lund University

Bayesian methods for sparse and low-rank matrix problems

Many scientific and engineering problems require us to process measurements and data in order to extract information. Since we base decisions on information, it is important to design accurate and efficient processing algorithms. This is often done by modeling the signal of interest and the noise in the problem. One such modeling framework is compressed sensing, where the signal has a sparse or low-rank representation. In this thesis we study different approaches to designing algorithms for sparse and low-rank problems. Greedy methods are fast methods for sparse problems which iteratively detect and estimate the non-zero components. By modeling the detection problem as an array processing problem and a Bayesian filtering problem, we improve the detection accuracy. Bayesian methods approximate the sparsity by probability distributions which are iteratively modified. We show one approach to making the Bayesian method the Relevance Vector ...

Sundin, Martin — Department of Signal Processing, Royal Institute of Technology KTH

Stability of Coupled Adaptive Filters

Nowadays, many disciplines in science and engineering deal with problems whose solution relies on knowledge about the characteristics of one or more given systems that can only be ascertained from restricted observations. This requires the fitting of an adequately chosen model such that it “best” conforms to a set of measured data. Depending on the context, this fitting procedure may resort to a huge amount of recorded data and abundant numerical power, or, on the contrary, to only a few streams of samples which have to be processed on the fly at low computational cost. This thesis focuses exclusively on the latter scenario. It specifically studies the unexpected behaviour and reliability of the widespread and computationally highly efficient class of gradient-type algorithms. Additionally, special attention is paid to systems that combine several of them. Chapter 3 is dedicated ...

Dallinger, Robert — TU Wien

Search-Based Methods for the Sparse Signal Recovery Problem in Compressed Sensing

Sparse signal recovery, which appears not only in compressed sensing but also in related problems such as sparse overcomplete representations, denoising, and sparse learning, has drawn significant attention in the last decade. The literature contains a vast number of recovery methods, which have been analysed both theoretically and empirically. This dissertation presents novel search-based sparse signal recovery methods. First, we present a theoretical analysis of the orthogonal matching pursuit algorithm with more iterations than the number of nonzero elements of the underlying sparse signal. Second, best-first tree search is incorporated for sparse recovery by a novel method, whose tractability follows from properly defined cost models and pruning techniques. The proposed method is evaluated by both theoretical and empirical analyses, which clearly demonstrate the improvements in recovery accuracy. Next, we introduce an iterative two ...
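For reference, here is a textbook sketch of orthogonal matching pursuit in its standard form (not the thesis's modified variants); note that `n_iter` may exceed the true sparsity, which is exactly the regime the abstract's first contribution analyses:

```python
import numpy as np

def omp(A, y, n_iter):
    """Orthogonal matching pursuit: at each step, pick the column most
    correlated with the residual, then re-fit by least squares on the
    selected support. A standard textbook sketch.
    """
    support = []
    residual = y.copy()
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        j = int(np.argmax(np.abs(A.T @ residual)))  # best-matching column
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x = np.zeros(A.shape[1])
        x[support] = coef                            # orthogonal re-projection
        residual = y - A @ x
    return x
```

Running more iterations than the sparsity level lets OMP recover from early wrong selections, since the least-squares refit can drive mistakenly selected coefficients toward zero.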

Karahanoglu, Nazim Burak — Sabanci University

Compressed sensing and dimensionality reduction for unsupervised learning

This work aims at exploiting compressive sensing paradigms in order to reduce the cost of statistical learning tasks. We first provide a review of the fundamentals of compressive sensing and describe some statistical analysis tasks using similar ideas. Then we describe a framework for performing parameter estimation on probabilistic mixture models in the case where training data is compressed to a fixed-size representation called a sketch. We formulate the estimation as a generalized inverse problem for which we propose a greedy algorithm. We evaluate this framework and algorithm on an isotropic Gaussian mixture model. This proof of concept suggests the existence of theoretical recovery guarantees for sparse objects beyond the usual vector and matrix cases. We therefore study the generalization of stability results for linear inverse problems to general signal models encompassing the standard cases and sparse mixtures of probability distributions. We ...

Bourrier, Anthony — INRIA, Technicolor

In this thesis, we present a convex optimization approach to address three problems arising in multicomponent image recovery, supervised classification, and image forgery detection. The common thread among these problems is the presence of nonlinear convex constraints difficult to handle with state-of-the-art methods. Therefore, we present a novel splitting technique to simplify the management of such constraints. Relying on this approach, we also propose some contributions that are tailored to the aforementioned applications. The first part of the thesis presents the epigraphical splitting of nonlinear convex constraints. The principle is to decompose the sublevel set of a block-separable function into a collection of epigraphs. So doing, we reduce the complexity of optimization algorithms when the above constraint involves the sum of absolute values, distance functions to a convex set, Euclidean norms, infinity norms, or max functions. We demonstrate through numerical ...

Chierchia, Giovanni — Telecom ParisTech

Multimedia consumer electronics are nowadays everywhere, from teleconferencing, hands-free communications and in-car communications to smart TV applications and more. We live in a world of telecommunication where ideal scenarios for implementing these applications are hard to find. Instead, practical implementations typically bring many problems associated with each real-life scenario. This thesis mainly focuses on two of these problems, namely acoustic echo and acoustic feedback. On the one hand, acoustic echo cancellation (AEC) is widely used in mobile and hands-free telephony, where the existence of echoes degrades intelligibility and listening comfort. On the other hand, acoustic feedback limits the maximum amplification that can be applied in, e.g., in-car communication or conferencing systems before howling appears due to instability. Even though AEC and acoustic feedback cancellation (AFC) are functional in many applications, there are still open issues. This means that ...

Gil-Cacho, Jose Manuel — KU Leuven

Magnetic Resonance Spectroscopy (MRS) is a technique which has evolved rapidly over the past 15 years. It has been used specifically in the context of brain tumours and has shown very encouraging correlations between brain tumour type and spectral pattern. In vivo MRS enables the quantification of metabolite concentrations non-invasively, thereby avoiding serious risks of brain damage. While Magnetic Resonance Imaging (MRI) is commonly used for identifying the location and size of brain tumours, MRS complements it with the potential to provide detailed chemical information about the metabolites present in brain tissue and to enable early detection of abnormality. However, the introduction of MRS into clinical medicine has been difficult due to problems associated with the acquisition of in vivo MRS signals from living tissues at the low magnetic fields acceptable for patients. The low signal-to-noise ratio makes accurate analysis of ...

Lukas, Lukas — Katholieke Universiteit Leuven

Sparse Sensing for Statistical Inference: Theory, Algorithms, and Applications

In today's society, we are flooded with massive volumes of data, on the order of a billion gigabytes a day, from pervasive sensors. It is becoming increasingly challenging to locally store and transport the acquired data to a central location for signal/data processing (i.e., for inference). To alleviate these problems, there is an urgent need to significantly reduce the sensing cost (i.e., the number of expensive sensors) as well as the related memory and bandwidth requirements by developing unconventional sensing mechanisms that extract as much information as possible while collecting fewer data. The first aim of this thesis is to develop theory and algorithms for data reduction. We develop a data reduction tool called sparse sensing, which consists of a deterministic and structured sensing function (guided by a sparse vector) that is optimally designed ...

Chepuri, Sundeep Prabhakar — Delft University of Technology

Robust Methods for Sensing and Reconstructing Sparse Signals

Compressed sensing (CS) is a recently introduced signal acquisition framework that departs from the traditional Nyquist sampling paradigm. CS demonstrates that a sparse, or compressible, signal can be acquired using a low-rate acquisition process. Since noise is always present in practical data acquisition systems, sensing and reconstruction methods are typically developed assuming a Gaussian (light-tailed) model for the corrupting noise. However, when the underlying signal and/or the measurements are corrupted by impulsive noise, commonly employed linear sampling operators, coupled with Gaussian-derived reconstruction algorithms, fail to recover a close approximation of the signal. This dissertation develops robust sampling and reconstruction methods for sparse signals in the presence of impulsive noise. To achieve this objective, we make use of robust statistics theory to develop appropriate methods addressing the problem of impulsive noise in CS systems. We develop a generalized Cauchy distribution (GCD) ...
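To make the contrast with Gaussian-derived methods concrete, here is a hedged sketch of how a Cauchy-family (Lorentzian) cost changes a greedy recovery step: its bounded influence function prevents impulsive residuals from dominating the gradient, whereas the squared-error gradient grows without bound. This is an illustrative reconstruction of the general idea, not the dissertation's algorithms; all names and parameter values are assumptions:

```python
import numpy as np

def lorentzian_influence(r, gamma=1.0):
    """Influence function of the Lorentzian cost log(1 + (r/gamma)^2).

    Bounded in r, so a single impulsive residual contributes only a
    limited amount to the gradient (unlike the unbounded r of least squares).
    """
    return 2 * r / (gamma ** 2 + r ** 2)

def robust_iht(A, y, s, gamma=1.0, n_iter=200, step=0.5):
    """Hard-thresholded gradient descent on the Lorentzian cost.

    A hedged sketch of a Lorentzian-flavoured IHT step; `gamma`, `step`
    and `n_iter` are illustrative, not values from the dissertation.
    """
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        r = y - A @ x
        g = x + step * A.T @ lorentzian_influence(r, gamma)  # robust gradient
        keep = np.argsort(np.abs(g))[-s:]                    # s largest entries
        x = np.zeros(A.shape[1])
        x[keep] = g[keep]
    return x
```

Replacing the residual by its bounded influence is the standard M-estimation trick; the Gaussian case is recovered in the limit of a quadratic cost, where the influence function is simply the residual itself.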

Carrillo, Rafael — University of Delaware

Contributions to signal analysis and processing using compressed sensing techniques

Chapter 2 contains a short introduction to the fundamentals of compressed sensing theory, which is the larger context of this thesis. We start by introducing the key concepts of sparsity and sparse representations of signals. We discuss the central problem of compressed sensing, i.e. how to adequately recover sparse signals from a small number of measurements, as well as the multiple formulations of the reconstruction problem. A large part of the chapter is devoted to some of the most important conditions, necessary and/or sufficient, for guaranteeing accurate recovery. The aim is to introduce the reader to the basic results without the burden of detailed proofs. In addition, we also present a few of the popular reconstruction and optimization algorithms that we use throughout the thesis. Chapter 3 presents an alternative sparsity model known as analysis sparsity, which offers similar recovery ...

Cleju, Nicolae — "Gheorghe Asachi" Technical University of Iasi

Kernel PCA and Pre-Image Iterations for Speech Enhancement

In this thesis, we present novel methods to enhance speech corrupted by noise. All methods are based on the processing of complex-valued spectral data. First, kernel principal component analysis (PCA) for speech enhancement is proposed. Subsequently, a simplification of kernel PCA, called pre-image iterations (PI), is derived. This method computes enhanced feature vectors iteratively by linear combination of noisy feature vectors. The weighting for the linear combination is found by a kernel function that measures the similarity between the feature vectors. The kernel variance is a key parameter for the degree of de-noising and has to be set according to the signal-to-noise ratio (SNR). Initially, PI were proposed for speech corrupted by additive white Gaussian noise. To be independent of knowledge about the SNR and to generalize to other stationary noise types, PI are extended by automatic determination of the ...
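The iterative weighted-combination idea can be sketched as follows; this is a generic Gaussian-kernel illustration of the mechanism the abstract describes, not the thesis's actual PI algorithm, and the function name and parameter values are assumptions:

```python
import numpy as np

def pre_image_iterations(Y, n_iter=10, sigma=1.0):
    """De-noise feature vectors by iteratively replacing each one with a
    kernel-weighted linear combination of all noisy feature vectors.

    Y: (n_vectors, dim) array of noisy feature vectors.
    sigma: kernel width; as the abstract notes, this governs the degree
    of de-noising and would normally be tied to the SNR.
    Illustrative sketch, not the thesis implementation.
    """
    X = Y.copy()
    for _ in range(n_iter):
        # Gaussian-kernel similarities between current estimates and noisy data
        d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=2)
        K = np.exp(-d2 / (2 * sigma ** 2))
        W = K / K.sum(axis=1, keepdims=True)   # normalized combination weights
        X = W @ Y                              # linear combination of noisy vectors
    return X
```

A small `sigma` averages only over very similar vectors (little de-noising), while a large `sigma` averages broadly (strong smoothing), which is why the kernel variance must be matched to the noise level.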

Leitner, Christina — Graz University of Technology

Bayesian Compressed Sensing using Alpha-Stable Distributions

During the last decades, information has been gathered and processed at an explosive rate. This gives rise to a very important issue: how to effectively and precisely describe the information content of a given source signal, or an ensemble of source signals, such that it can be stored, processed or transmitted within the limitations and capabilities of the various digital devices. One of the fundamental principles of signal processing for decades has been the Nyquist-Shannon sampling theorem, which states that the minimum number of samples needed to reconstruct a signal without error is dictated by its bandwidth. However, there are many cases in our everyday lives in which sampling at the Nyquist rate results in too much data, demanding increased processing power as well as storage. A mathematical theory that emerged ...

Tzagkarakis, George — University of Crete

Phase Noise and Wideband Transmission in Massive MIMO

In the last decades the world has experienced a massive growth in the demand for wireless services. The recent popularity of hand-held devices with data exchange capabilities over wireless networks, such as smartphones and tablets, has increased wireless data traffic even further. This trend is not expected to cease in the foreseeable future. In fact, it is expected to accelerate as everyday apparatus unrelated to data communications, such as vehicles or household devices, are foreseen to be equipped with wireless communication capabilities. Further, the next generation of wireless networks should be designed such that they have increased spectral and energy efficiency, provide uniformly good service to all of the accommodated users and handle many more devices simultaneously. Massive multiple-input multiple-output (Massive MIMO) systems, also termed large-scale MIMO, very large MIMO or full-dimension MIMO, have recently been proposed as a candidate ...

Pitarokoilis, Antonios — Linköping University
