Bayesian Compressed Sensing using Alpha-Stable Distributions (2009)
Robust Methods for Sensing and Reconstructing Sparse Signals
Compressed sensing (CS) is a recently introduced signal acquisition framework that goes against the traditional Nyquist sampling paradigm. CS demonstrates that a sparse, or compressible, signal can be acquired using a low rate acquisition process. Since noise is always present in practical data acquisition systems, sensing and reconstruction methods are developed assuming a Gaussian (light-tailed) model for the corrupting noise. However, when the underlying signal and/or the measurements are corrupted by impulsive noise, commonly employed linear sampling operators, coupled with Gaussian-derived reconstruction algorithms, fail to recover a close approximation of the signal. This dissertation develops robust sampling and reconstruction methods for sparse signals in the presence of impulsive noise. To achieve this objective, we make use of robust statistics theory to develop appropriate methods addressing the problem of impulsive noise in CS systems. We develop a generalized Cauchy distribution (GCD) ...
Carrillo, Rafael — University of Delaware
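As a toy illustration of the robust-statistics argument in the abstract above, the sketch below compares a squared-error data-fidelity term with a Lorentzian (Cauchy-type, heavy-tailed) one on a residual vector containing a single impulsive outlier. The scale parameter `gamma` and the residual values are arbitrary illustrative choices, not the GCD-based machinery developed in the dissertation.

```python
import numpy as np

# Residuals from a hypothetical measurement fit: mostly small errors
# plus one gross outlier, as produced by impulsive noise.
r = np.array([0.1, -0.2, 0.15, 0.05, -0.1, 50.0])

gamma = 1.0  # Lorentzian scale parameter (illustrative choice)

squared_cost = r**2                          # Gaussian-derived fidelity
lorentzian_cost = np.log1p((r / gamma)**2)   # heavy-tailed (Cauchy-type) fidelity

print("squared   :", squared_cost.round(3), " total =", squared_cost.sum().round(1))
print("lorentzian:", lorentzian_cost.round(3), " total =", lorentzian_cost.sum().round(1))
# The squared cost is dominated by the single outlier (2500 vs ~0.1 for the rest),
# so a least-squares reconstruction bends to fit it. The Lorentzian cost grows
# only logarithmically, so the outlier has bounded influence.
```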
Signal Processing In Stable Noise Environments: A Least lp Norm Approach
This dissertation is concerned with the development of new optimal techniques for the solution of signal processing problems involving impulsive data. Although the signal processing and communications field has been dominated by the Gaussian distribution, it has long been known that atmospheric noise, underwater acoustic noise, electromagnetic disturbances on telephone lines and financial time series show an impulsive character which cannot be described by a Gaussian distribution. Recently, there has been great interest in the alpha-stable distribution. This thesis, in agreement with some of the recent work, defends the alpha-stable model for impulsive data. Justifications for the alpha-stable model are given and various analytical properties of these distributions are discussed. This discussion leads us to the minimum dispersion criterion, which is the analogue of the minimum mean squared error criterion for alpha-stable distributed data. Based on the minimum dispersion criterion, ...
Kuruoglu, Ercan Engin — University of Cambridge
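The minimum dispersion criterion mentioned above leads to least-lp-norm estimation with p below the characteristic exponent. A minimal numpy sketch of one common way to compute such an estimate, iteratively reweighted least squares (IRLS); the problem dimensions, the choice p = 1.2, and the small regularizer `eps` are illustrative assumptions, not values from the thesis.

```python
import numpy as np

def lp_regression(A, y, p=1.2, iters=50, eps=1e-6):
    """Minimize sum_i |y_i - (A x)_i|^p via iteratively reweighted least squares."""
    x = np.linalg.lstsq(A, y, rcond=None)[0]        # start from the L2 solution
    for _ in range(iters):
        r = y - A @ x
        w = (np.abs(r) + eps) ** (p - 2)            # IRLS weights |r|^(p-2)
        Aw = A * w[:, None]                         # row-weighted design matrix
        x = np.linalg.solve(A.T @ Aw, Aw.T @ y)     # weighted least-squares update
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 5))
x_true = np.array([1.0, -2.0, 0.5, 3.0, -1.0])
noise = rng.standard_cauchy(200)                    # symmetric alpha-stable, alpha = 1
y = A @ x_true + noise

print("least squares:", np.linalg.lstsq(A, y, rcond=None)[0].round(2))
print("least l_1.2  :", lp_regression(A, y).round(2))
```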
Compressive Sensing of Cyclostationary Propeller Noise
This dissertation is the combination of three manuscripts, either published in or submitted to journals, on compressive sensing of propeller noise for the detection, identification and localization of watercraft. Propeller noise, produced by the rotating blades, is broadband and radiates through water, dominating the underwater acoustic noise spectrum especially when cavitation develops. Propeller cavitation yields cyclostationary noise which can be modeled by amplitude modulation, i.e., the envelope-carrier product. The envelope consists of the so-called propeller tonals, which represent propeller characteristics and are used to identify watercraft, whereas the carrier is a stationary broadband process. Sampling for propeller noise processing yields large data sizes due to the Nyquist rate and multiple sensor deployment. A compressive sensing scheme is proposed for efficient sampling of second-order cyclostationary propeller noise since the spectral correlation function of the amplitude modulation model is sparse as shown in ...
Fırat, Umut — Istanbul Technical University
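A small numpy sketch of the amplitude-modulation (envelope-carrier) model described above, followed by a classical squaring-and-FFT envelope detector that exposes the propeller tonals as spectral lines. The sampling rate, the tonals at harmonics of an assumed 5 Hz shaft rate, and the simple squaring detector are illustrative assumptions, not the compressive sampling scheme proposed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)
fs, T = 2000.0, 10.0                     # sampling rate (Hz) and duration (s)
t = np.arange(int(fs * T)) / fs

# Envelope: propeller tonals at harmonics of an assumed 5 Hz shaft rate.
shaft = 5.0
envelope = 1.0 + 0.5 * np.cos(2 * np.pi * shaft * t) \
               + 0.3 * np.cos(2 * np.pi * 2 * shaft * t)

carrier = rng.standard_normal(t.size)    # stationary broadband carrier
x = envelope * carrier                   # cyclostationary AM model

# Squaring (envelope) detector followed by an FFT: the tonals reappear
# as discrete lines at multiples of the shaft rate in the spectrum of x^2.
demon = np.abs(np.fft.rfft(x**2 - np.mean(x**2)))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
peaks = freqs[np.argsort(demon)[-3:]]
print("strongest envelope-spectrum lines (Hz):", np.sort(peaks).round(1))
```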
Bayesian methods for sparse and low-rank matrix problems
Many scientific and engineering problems require us to process measurements and data in order to extract information. Since we base decisions on information, it is important to design accurate and efficient processing algorithms. This is often done by modeling the signal of interest and the noise in the problem. One type of modeling is Compressed Sensing, where the signal has a sparse or low-rank representation. In this thesis we study different approaches to designing algorithms for sparse and low-rank problems. Greedy methods are fast methods for sparse problems which iteratively detect and estimate the non-zero components. By modeling the detection problem as an array processing problem and a Bayesian filtering problem, we improve the detection accuracy. Bayesian methods approximate the sparsity by probability distributions which are iteratively modified. We show one approach to making the Bayesian method the Relevance Vector ...
Sundin, Martin — Department of Signal Processing, Royal Institute of Technology KTH
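A minimal numpy sketch of the greedy detect-then-estimate loop described above, in the form of textbook orthogonal matching pursuit (OMP). This is a generic version, not the array-processing or Bayesian-filtering-aided detector developed in the thesis, and the problem dimensions are arbitrary.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily detect and estimate k non-zero components."""
    m, n = A.shape
    support, residual = [], y.copy()
    x = np.zeros(n)
    for _ in range(k):
        # Detection step: column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        # Estimation step: least squares restricted to the detected support.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x = np.zeros(n)
        x[support] = coef
        residual = y - A @ x
    return x

rng = np.random.default_rng(0)
m, n, k = 50, 200, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)      # random sensing matrix
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true + 0.01 * rng.standard_normal(m)

x_hat = omp(A, y, k)
print("support found:", np.nonzero(x_hat)[0], " true:", np.nonzero(x_true)[0])
```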
Generalised Bayesian Model Selection Using Reversible Jump Markov Chain Monte Carlo
The main objective of this thesis is to suggest a general Bayesian framework for model selection based on the reversible jump Markov chain Monte Carlo (RJMCMC) algorithm. In particular, we aim to reveal the undiscovered potentials of RJMCMC in model selection applications by exploiting the original formulation to explore spaces of different classes or structures and thus to show that RJMCMC offers a wider interpretation than just being a trans-dimensional model selection algorithm. The general practice is to use RJMCMC in a trans-dimensional framework, e.g. in model estimation studies of linear time series such as AR and ARMA, mixture processes, etc. In this thesis, we propose a new interpretation of RJMCMC which reveals the undiscovered potentials of the algorithm. This new interpretation, firstly, extends the classical trans-dimensional approach to a much wider meaning by exploring the spaces ...
Karakus, Oktay — Izmir Institute of Technology
Compressed Sensing: Novel Applications, Challenges, and Techniques
Compressed Sensing (CS) is a widely used technique for efficient signal acquisition, in which a very small number of (possibly noisy) linear measurements of an unknown signal vector are taken via multiplication with a designed ‘sensing matrix’ in an application-specific manner; the signal is later recovered by exploiting its sparsity in some known orthonormal basis and special properties of the sensing matrix which allow for such recovery. We study three new applications of CS, each of which poses a unique challenge in a different aspect of the framework, and propose novel techniques to solve them, advancing the field of CS. Each application involves a unique combination of realistic assumptions on the measurement noise model and the signal, and a unique set of algorithmic challenges. We frame Pooled RT-PCR Testing for COVID-19 – wherein RT-PCR (Reverse Transcription Polymerase Chain ...
Ghosh, Sabyasachi — Department of Computer Science and Engineering, Indian Institute of Technology Bombay
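A toy numpy/scipy sketch of the pooled-testing idea framed as compressed sensing: each pool measures a random subset of samples, the vector of viral loads is sparse and non-negative, and a plain non-negative least-squares decoder is used here purely for illustration. The pool sizes, dimensions, noise, and decoder are arbitrary assumptions, not the algorithms developed in the thesis.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)
n_samples, n_pools, n_infected = 100, 30, 3

# Random binary pooling matrix: pool i contains sample j iff A[i, j] == 1.
A = (rng.random((n_pools, n_samples)) < 0.1).astype(float)

# Sparse, non-negative vector of viral loads (most samples are negative).
x_true = np.zeros(n_samples)
x_true[rng.choice(n_samples, n_infected, replace=False)] = rng.uniform(1.0, 5.0, n_infected)

# Pooled measurements with a small amount of non-negative noise.
y = A @ x_true + 0.01 * rng.random(n_pools)

# Non-negative least squares as a simple sparsity-friendly decoder.
x_hat, _ = nnls(A, y)
print("declared positive:", np.nonzero(x_hat > 0.1)[0])
print("truly infected   :", np.nonzero(x_true)[0])
```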
Linear Dynamical Systems with Sparsity Constraints: Theory and Algorithms
This thesis develops new mathematical theory and presents novel recovery algorithms for discrete linear dynamical systems (LDS) with sparsity constraints on either control inputs or initial state. The recovery problems in this framework manifest as the problem of reconstructing one or more sparse signals from a set of noisy underdetermined linear measurements. The goal of our work is to design algorithms for sparse signal recovery which can exploit the underlying structure in the measurement matrix and the unknown sparse vectors, and to analyze the impact of these structures on the efficacy of the recovery. We answer three fundamental and interconnected questions on sparse signal recovery problems that arise in the context of LDS. First, what are necessary and sufficient conditions for the existence of a sparse solution? Second, given that a sparse solution exists, what are good low-complexity algorithms that ...
Joseph, Geethu — Indian Institute of Science, Bangalore
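A small sketch of the kind of structured recovery problem described above: the outputs of a linear dynamical system with a sparse initial state are stacked into an underdetermined linear system whose matrix is an observability-type matrix, and the state is recovered here with a basic l1 (basis pursuit) decoder posed as a linear program. The system matrices, dimensions, and the choice of decoder are illustrative assumptions, not those analyzed in the thesis.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
n, p, K, s = 80, 3, 15, 4           # state dim, outputs, time steps, sparsity

# Linear dynamical system x_{k+1} = F x_k, y_k = C x_k with a sparse initial state.
F, _ = np.linalg.qr(rng.standard_normal((n, n)))   # orthogonal transition (illustrative)
C = rng.standard_normal((p, n))

x0 = np.zeros(n)
x0[rng.choice(n, s, replace=False)] = rng.standard_normal(s)

# Stack the outputs: Y = [C; CF; CF^2; ...] x0 -- an underdetermined linear system.
blocks, M = [], np.eye(n)
for _ in range(K):
    blocks.append(C @ M)
    M = F @ M
Phi = np.vstack(blocks)             # (p*K) x n with p*K < n
y = Phi @ x0

# Basis pursuit (min ||x||_1 s.t. Phi x = y) as a linear program in (u, v), x = u - v.
c = np.ones(2 * n)
res = linprog(c, A_eq=np.hstack([Phi, -Phi]), b_eq=y, bounds=(0, None), method="highs")
x_hat = res.x[:n] - res.x[n:]
print("recovery error:", np.linalg.norm(x_hat - x0))
```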
Parameter Estimation and Filtering Using Sparse Modeling
Sparsity-based estimation techniques deal with the problem of retrieving a data vector from an undercomplete set of linear observations, when the data vector is known to have few nonzero elements with unknown positions. It is also known as the atomic decomposition problem, and has been carefully studied in the field of compressed sensing. Recent findings have established a method called basis pursuit, also known as the Least Absolute Shrinkage and Selection Operator (LASSO), as a numerically reliable sparsity-based approach. Although the atomic decomposition problem is generally NP-hard, it has been shown that basis pursuit may provide exact solutions under certain assumptions. This has led to an extensive study of signals with sparse representation in different domains, providing a new general insight into signal processing. This thesis further investigates the role of sparsity-based techniques, especially basis pursuit, for solving parameter estimation ...
Panahi, Ashkan — Chalmers University of Technology
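A compact numpy sketch of the LASSO problem mentioned above, solved with iterative soft thresholding (ISTA). The step size, regularization weight, and dimensions are illustrative assumptions, and ISTA is only one of many possible solvers for this problem.

```python
import numpy as np

def ista(A, y, lam, iters=500):
    """Minimize 0.5*||y - A x||_2^2 + lam*||x||_1 by iterative soft thresholding."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - y)
        z = x - grad / L                     # gradient step on the smooth term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    return x

rng = np.random.default_rng(4)
m, n, s = 60, 150, 6
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.choice([-1.0, 1.0], s) * rng.uniform(1, 2, s)
y = A @ x_true + 0.01 * rng.standard_normal(m)

x_hat = ista(A, y, lam=0.02)
print("estimated support:", np.nonzero(np.abs(x_hat) > 0.1)[0])
print("true support     :", np.nonzero(x_true)[0])
```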
Sparse Signal Recovery From Incomplete And Perturbed Data
Sparse signal recovery consists of algorithms that are able to recover undersampled high-dimensional signals accurately. These algorithms require fewer measurements than the traditional Shannon/Nyquist sampling theorem demands. Sparse signal recovery has found many applications including magnetic resonance imaging, electromagnetic inverse scattering, radar/sonar imaging, seismic data collection, sensor array processing and channel estimation. The focus of this thesis is on the electromagnetic inverse scattering problem and the joint estimation of the frequency offset and the channel impulse response in OFDM. In the electromagnetic inverse scattering problem, the aim is to find the electromagnetic properties of unknown targets from the measured scattered field. The reconstruction of closely placed point-like objects is investigated. The application of the greedy pursuit based sparse recovery methods, OMP and FTB-OMP, is proposed for increasing the reconstruction resolution. The performances of the proposed methods are compared against the NESTA and MT-BCS methods. ...
Senyuva, Rifat Volkan — Bogazici University
MIMO Radars with Sparse Sensing
Multi-input multi-output (MIMO) radars achieve high direction-of-arrival resolution by transmitting orthogonal waveforms, performing matched filtering at the receiver end and then jointly processing the measurements of all receive antennas. This dissertation studies the use of compressive sensing (CS) and matrix completion (MC) techniques as means of reducing the amount of data that need to be collected by a MIMO radar system, without sacrificing the system’s good resolution properties. MIMO radars with sparse sensing are useful in networked radar scenarios, in which the joint processing of the measurements is done at a fusion center, which might be connected to the receive antennas via a wireless link. In such scenarios, a reduced amount of data translates into bandwidth and power savings in the link between the receivers and the fusion center. First, we consider previously defined CS-based MIMO radar schemes, and propose optimal transmit antenna ...
Sun, Shunqiao — Rutgers, The State University of New Jersey
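A toy numpy sketch of the matrix-completion idea referred to above: only a subset of the entries of a low-rank data matrix reaches the fusion center, and the rest are filled in by alternating between imputing the missing entries and truncating to rank r. The dimensions, sampling rate, and this simple impute-and-truncate iteration are illustrative assumptions, not the MC algorithms analyzed in the dissertation.

```python
import numpy as np

rng = np.random.default_rng(5)
n1, n2, r = 40, 40, 2
M = rng.standard_normal((n1, r)) @ rng.standard_normal((r, n2))   # rank-r matrix

mask = rng.random((n1, n2)) < 0.6          # observe ~60% of the entries
Y = np.where(mask, M, 0.0)                 # data available at the fusion center

# Simple iterative completion: fill the missing entries with the current
# estimate, then project back onto the set of rank-r matrices via the SVD.
X = Y.copy()
for _ in range(200):
    U, s, Vt = np.linalg.svd(np.where(mask, Y, X), full_matrices=False)
    X = (U[:, :r] * s[:r]) @ Vt[:r]
print("relative completion error:", np.linalg.norm(X - M) / np.linalg.norm(M))
```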
Exploiting Sparsity for Efficient Compression and Analysis of ECG and Fetal-ECG Signals
Over the last decade there has been an increasing interest in solutions for the continuous monitoring of health status with wireless, and in particular wearable, devices that provide remote analysis of physiological data. The use of wireless technologies has introduced new problems, such as the transmission of a huge amount of data within the battery-life constraints of such devices. The design of an accurate and energy-efficient telemonitoring system can be achieved by reducing the amount of data that must be transmitted, which is still a challenging task on devices with both computational and energy constraints. Furthermore, it is not sufficient merely to collect and transmit data; algorithms that provide real-time analysis are also needed. In this thesis, we address the problems of compression and analysis of physiological data using the emerging frameworks of Compressive Sensing (CS) and sparse ...
Da Poian, Giulia — University of Udine
Signal acquisition is a main topic in signal processing. The well-known Shannon-Nyquist theorem lies at the heart of conventional analog-to-digital converters, stating that a signal has to be sampled at a constant rate of at least twice its highest frequency in order to be perfectly recovered. However, the Shannon-Nyquist theorem provides a worst-case rate bound for any bandlimited data. In this context, Compressive Sensing (CS) is a new framework in which data acquisition and data processing are merged. CS allows the data to be compressed as it is sampled by exploiting the sparsity present in many common signals. In so doing, it provides an efficient way to reduce the number of measurements needed for perfect recovery of the signal. CS has exploded in recent years with thousands of technical publications and applications ...
Lagunas, Eva — Universitat Politecnica de Catalunya
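A short numpy illustration of the Nyquist statement in the abstract above: a 7 Hz sinusoid sampled at 10 Hz (below the required 14 Hz) becomes indistinguishable from a 3 Hz sinusoid. The specific frequencies are arbitrary choices for the example.

```python
import numpy as np

fs = 10.0                                  # sampling rate below Nyquist for 7 Hz
n = np.arange(20)
t = n / fs

x_7hz = np.cos(2 * np.pi * 7.0 * t)        # signal above fs/2
x_3hz = np.cos(2 * np.pi * 3.0 * t)        # alias at |7 - fs| = 3 Hz

# The two sample sequences are identical: sub-Nyquist sampling has lost
# the ability to tell 7 Hz and 3 Hz apart (aliasing).
print(np.allclose(x_7hz, x_3hz))           # True
```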
Sequential Bayesian Modeling of non-stationary signals
are involved until the development of Sequential Monte Carlo techniques, also known as particle filters. In particle filtering, the problem is expressed in terms of state-space equations in which the linearity and Gaussianity requirements of Kalman filtering are generalized. Therefore, we need information about the functional form of the state variations. In this thesis, we provide a general solution for cases where these variations are unknown and the process distributions cannot be expressed by any closed-form probability density function. Here, we propose a novel modeling scheme which is as unified as possible, covering all of these problems. We then study the performance of our unifying particle filtering methodology on non-stationary alpha-stable process modeling. It is well known that the probability density functions of these processes cannot be expressed in closed form, except for ...
Gencaga, Deniz — Bogazici University
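A minimal bootstrap particle filter sketch in numpy for a generic nonlinear, non-Gaussian state-space model, to make the setting described above concrete. The particular toy model (a random-walk state observed through a cubic nonlinearity with Gaussian noise), the particle count, and the multinomial resampling scheme are illustrative assumptions, not the unified methodology or the alpha-stable models developed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(6)
T, N = 100, 500                            # time steps and number of particles

# Assumed toy state-space model:
#   x_t = x_{t-1} + process noise,   y_t = x_t^3 / 20 + measurement noise
x = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    x[t] = x[t - 1] + 0.5 * rng.standard_normal()
    y[t] = x[t] ** 3 / 20.0 + 0.5 * rng.standard_normal()

# Bootstrap particle filter: propagate with the state equation, weight by the
# likelihood of the new observation, then resample.
particles = rng.standard_normal(N)
estimate = np.zeros(T)
for t in range(1, T):
    particles = particles + 0.5 * rng.standard_normal(N)       # propagate
    w = np.exp(-0.5 * ((y[t] - particles ** 3 / 20.0) / 0.5) ** 2)
    w /= w.sum()                                               # normalize weights
    estimate[t] = np.sum(w * particles)                        # posterior mean
    particles = rng.choice(particles, size=N, p=w)             # resample

print("RMSE of filtered state:", np.sqrt(np.mean((estimate[1:] - x[1:]) ** 2)))
```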
Compressed sensing approaches to large-scale tensor decompositions
Today’s society is characterized by an abundance of data that is generated at an unprecedented velocity. However, much of this data is immediately thrown away by compression or information extraction. In a compressed sensing (CS) setting the inherent sparsity in many datasets is exploited by avoiding the acquisition of superfluous data in the first place. We combine this technique with tensors, or multiway arrays of numerical values, which are higher-order generalizations of vectors and matrices. As the number of entries scales exponentially in the order, tensor problems are often large-scale. We show that the combination of simple, low-rank tensor decompositions with CS effectively alleviates or even breaks the so-called curse of dimensionality. After discussing the larger data fusion optimization framework for coupled and constrained tensor decompositions, we investigate three categories of CS type algorithms to deal with large-scale problems. First, ...
Vervliet, Nico — KU Leuven
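A compact numpy sketch of a rank-R CP (canonical polyadic) decomposition of a third-order tensor computed with alternating least squares, illustrating how a few factor matrices replace the full array of entries. The tensor sizes, the rank, and the plain ALS scheme (no compressed sensing, coupling, or constraints) are simplifying assumptions relative to the thesis.

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Kronecker product: rows indexed by (i, j) with j varying fastest."""
    I, R = A.shape
    J, _ = B.shape
    return np.einsum('ir,jr->ijr', A, B).reshape(I * J, R)

def unfold(X, mode):
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def cp_als(X, R, iters=100):
    """Rank-R CP decomposition of a 3-way tensor by alternating least squares."""
    rng = np.random.default_rng(0)
    A, B, C = (rng.standard_normal((X.shape[m], R)) for m in range(3))
    for _ in range(iters):
        # With these C-order unfoldings, X_(0) = A (B kr C)^T, X_(1) = B (A kr C)^T, etc.
        A = unfold(X, 0) @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = unfold(X, 1) @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = unfold(X, 2) @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

# Build a 30 x 30 x 30 tensor of exact rank 3 and recover its factors.
rng = np.random.default_rng(7)
R = 3
A0, B0, C0 = (rng.standard_normal((30, R)) for _ in range(3))
X = np.einsum('ir,jr,kr->ijk', A0, B0, C0)

A, B, C = cp_als(X, R)
X_hat = np.einsum('ir,jr,kr->ijk', A, B, C)
print("relative fit error:", np.linalg.norm(X_hat - X) / np.linalg.norm(X))
print("entries stored: full tensor =", X.size, " CP factors =", 3 * 30 * R)
```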
Contributions to signal analysis and processing using compressed sensing techniques
Chapter 2 contains a short introduction to the fundamentals of compressed sensing theory, which is the larger context of this thesis. We start with introducing the key concepts of sparsity and sparse representations of signals. We discuss the central problem of compressed sensing, i.e. how to adequately recover sparse signals from a small number of measurements, as well as the multiple formulations of the reconstruction problem. A large part of the chapter is devoted to some of the most important conditions necessary and/or sufficient to guarantee accurate recovery. The aim is to introduce the reader to the basic results, without the burden of detailed proofs. In addition, we also present a few of the popular reconstruction and optimization algorithms that we use throughout the thesis. Chapter 3 presents an alternative sparsity model known as analysis sparsity, that offers similar recovery ...
Cleju, Nicolae — "Gheorghe Asachi" Technical University of Iasi