Linear Dynamical Systems with Sparsity Constraints: Theory and Algorithms

This thesis develops new mathematical theory and presents novel recovery algorithms for discrete linear dynamical systems (LDS) with sparsity constraints on either the control inputs or the initial state. The recovery problems in this framework reduce to reconstructing one or more sparse signals from a set of noisy, underdetermined linear measurements. The goal of our work is to design sparse signal recovery algorithms that can exploit the underlying structure in the measurement matrix and the unknown sparse vectors, and to analyze the impact of these structures on the efficacy of the recovery. We answer three fundamental and interconnected questions on sparse signal recovery problems that arise in the context of LDS. First, what are necessary and sufficient conditions for the existence of a sparse solution? Second, given that a sparse solution exists, what are good low-complexity algorithms that ...

Joseph, Geethu — Indian Institute of Science, Bangalore
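
The core recovery problem described above, reconstructing a sparse vector from noisy underdetermined linear measurements y = Ax + n, can be illustrated with a minimal sketch (not the thesis's own algorithm): iterative soft-thresholding (ISTA) for the l1-regularized least-squares formulation. All dimensions and parameters below are illustrative.

```python
import numpy as np

def ista(A, y, lam=0.1, n_iter=500):
    """Iterative soft-thresholding for min_x 0.5*||y - A x||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1/L, L = squared spectral norm of A
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - step * (A.T @ (A @ x - y))        # gradient step on the data-fit term
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft threshold
    return x

rng = np.random.default_rng(0)
m, n, k = 40, 100, 5                              # underdetermined: m < n
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.uniform(1.0, 2.0, k)
y = A @ x_true + 0.01 * rng.standard_normal(m)

x_hat = ista(A, y)
print("estimated support:", np.sort(np.argsort(-np.abs(x_hat))[:k]))
print("true support:     ", np.flatnonzero(x_true))
```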


Robust Network Topology Inference and Processing of Graph Signals

The abundance of large and heterogeneous systems is rendering contemporary data more pervasive, more intricate, and irregular in structure. With classical techniques struggling to cope with the irregular (non-Euclidean) domains on which such signals are defined, a popular approach at the heart of graph signal processing (GSP) is to (i) represent the underlying support via a graph and (ii) exploit the topology of this graph to process the signals at hand. In addition to the irregular structure of the signals, another critical limitation is that the observed data are prone to perturbations which, in the context of GSP, may affect not only the observed signals but also the topology of the supporting graph. Ignoring the presence of perturbations, along with the couplings between the errors in the signal and the errors in their support, can drastically hinder ...

Rey, Samuel — King Juan Carlos University
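
As a toy illustration of the GSP idea mentioned above, exploiting graph topology to process signals, the sketch below (hypothetical graph and filter coefficients, not the thesis's method) applies a short polynomial graph filter, built from powers of the adjacency matrix used as the graph-shift operator, to a signal defined on the nodes.

```python
import numpy as np

# Hypothetical 5-node undirected graph and a signal with one value per node.
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0],
              [1, 1, 0, 0, 1],
              [0, 1, 0, 0, 1],
              [0, 0, 1, 1, 0]], dtype=float)     # adjacency = graph-shift operator S
x = np.array([1.0, -0.5, 0.3, 2.0, -1.2])        # graph signal

# Order-2 polynomial graph filter: H = h0*I + h1*S + h2*S^2.
h = [0.5, 0.3, 0.2]
S_pow = np.eye(A.shape[0])
y = np.zeros_like(x)
for hk in h:
    y += hk * (S_pow @ x)                        # add the k-th shifted copy of the signal
    S_pow = S_pow @ A
print("filtered graph signal:", y)
```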


Bayesian Compressed Sensing using Alpha-Stable Distributions

Over the last decades, information has been gathered and processed at an explosive rate. This gives rise to a very important issue: how to effectively and precisely describe the information content of a given source signal, or an ensemble of source signals, so that it can be stored, processed, or transmitted while taking into account the limitations and capabilities of the various digital devices involved. For decades, one of the fundamental principles of signal processing has been the Nyquist-Shannon sampling theorem, which states that the minimum number of samples needed to reconstruct a signal without error is dictated by its bandwidth. However, there are many cases in everyday life in which sampling at the Nyquist rate produces too much data, demanding increased processing power as well as storage. A mathematical theory that emerged ...

Tzagkarakis, George — University of Crete
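
As a small illustration of why the alpha-stable models named in the title are attractive priors for sparse or impulsive amplitudes, the hedged sketch below (assuming SciPy is available; the specific alpha and threshold are arbitrary, not values from the thesis) contrasts the tail behaviour of Gaussian and symmetric alpha-stable samples.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 20_000

gauss = stats.norm.rvs(size=n, random_state=rng)
# Symmetric alpha-stable draws: alpha = 2 is Gaussian, alpha < 2 gives heavy tails.
sas = stats.levy_stable.rvs(alpha=1.2, beta=0.0, size=n, random_state=rng)

for name, s in [("Gaussian", gauss), ("SaS, alpha=1.2", sas)]:
    frac = np.mean(np.abs(s) > 5 * np.median(np.abs(s)))
    print(f"{name:>15s}: fraction of samples beyond 5x the median magnitude = {frac:.4f}")
```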


Matrix Designs and Methods for Secure and Efficient Compressed Sensing

The idea of matching the resources spent in the acquisition and encoding of natural signals to their intrinsic information content has driven nearly a decade of research under the name of compressed sensing. In this doctoral dissertation we develop extensions and improvements upon this technique’s foundations by modifying the random sensing matrices onto which the signals of interest are projected, in order to achieve different objectives. Firstly, we propose two methods for adapting sensing matrix ensembles to the second-order moments of natural signals. These techniques leverage the maximisation of different proxies for the quantity of information acquired by compressed sensing, and are efficiently applied in the encoding of natural signals with minimum-complexity digital hardware. Secondly, we focus on the possibility of using compressed sensing as a method to provide a partial, yet cryptanalysis-resistant form of encryption. In ...

Cambareri, Valerio — University of Bologna
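
To give a flavour of adapting a sensing matrix to second-order moments, the toy sketch below (not the dissertation's actual design procedure) compares the signal energy captured by an i.i.d. random sensing matrix with that captured by rows aligned to the leading eigenvectors of an assumed signal covariance; "captured energy" is used here as one simple proxy for the quantity of information acquired.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 64, 8

# Hypothetical correlated-signal model: covariance with a rapidly decaying spectrum.
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
eigvals = 0.9 ** np.arange(n)
Sigma = U @ np.diag(eigvals) @ U.T

def captured_energy(Phi, Sigma):
    """Trace of Phi Sigma Phi^T: average energy of the m compressed measurements."""
    return np.trace(Phi @ Sigma @ Phi.T)

Phi_rand = rng.standard_normal((m, n)) / np.sqrt(n)   # i.i.d. random sensing rows
Phi_adapt = U[:, :m].T                                # rows = top covariance eigenvectors

print("random sensing, captured energy: ", captured_energy(Phi_rand, Sigma))
print("adapted sensing, captured energy:", captured_energy(Phi_adapt, Sigma))
```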


Contributions to signal analysis and processing using compressed sensing techniques

Chapter 2 contains a short introduction to the fundamentals of compressed sensing theory, which is the larger context of this thesis. We start by introducing the key concepts of sparsity and sparse representations of signals. We discuss the central problem of compressed sensing, i.e. how to accurately recover sparse signals from a small number of measurements, as well as the multiple formulations of the reconstruction problem. A large part of the chapter is devoted to some of the most important conditions that are necessary and/or sufficient to guarantee accurate recovery. The aim is to introduce the reader to the basic results, without the burden of detailed proofs. In addition, we also present a few of the popular reconstruction and optimization algorithms that we use throughout the thesis. Chapter 3 presents an alternative sparsity model known as analysis sparsity, which offers similar recovery ...

Cleju, Nicolae — "Gheorghe Asachi" Technical University of Iasi
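
As a quick illustration of the analysis-sparsity model introduced in Chapter 3, the sketch below (a generic example, not taken from the thesis) shows a piecewise-constant signal that is not sparse itself but becomes sparse under a first-order finite-difference analysis operator.

```python
import numpy as np

# A piecewise-constant signal is not sparse, but its finite differences are:
# this is the analysis-sparsity view, with Omega a first-order difference operator.
x = np.concatenate([np.full(30, 1.0), np.full(40, -2.0), np.full(30, 0.5)])

n = x.size
Omega = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)   # (n-1) x n finite-difference operator

print("nonzeros in x:        ", np.count_nonzero(x))
print("nonzeros in Omega @ x:", np.count_nonzero(np.round(Omega @ x, 12)))
```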


MIMO Radars with Sparse Sensing

Multiple-input multiple-output (MIMO) radars achieve high direction-of-arrival resolution by transmitting orthogonal waveforms, performing matched filtering at the receiver, and then jointly processing the measurements of all receive antennas. This dissertation studies the use of compressive sensing (CS) and matrix completion (MC) techniques as means of reducing the amount of data that needs to be collected by a MIMO radar system, without sacrificing the system’s good resolution properties. MIMO radars with sparse sensing are useful in networked radar scenarios, in which the joint processing of the measurements is done at a fusion center that might be connected to the receive antennas via a wireless link. In such scenarios, a reduced amount of data translates into bandwidth and power savings on the receiver-fusion center link. First, we consider previously defined CS-based MIMO radar schemes, and propose optimal transmit antenna ...

Sun, Shunqiao — Rutgers, The State University of New Jersey
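
To illustrate the matrix-completion ingredient mentioned above, the following sketch (a generic "hard impute" style heuristic on synthetic data, not the dissertation's estimator) fills in the missing entries of a low-rank matrix by alternating between data consistency on the observed entries and projection onto rank-r matrices.

```python
import numpy as np

def hard_impute(M_obs, mask, rank, n_iter=500):
    """Alternate between keeping observed entries and projecting to a rank-r matrix."""
    X = np.zeros_like(M_obs)
    for _ in range(n_iter):
        X = np.where(mask, M_obs, X)                  # data consistency on observed entries
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U[:, :rank] * s[:rank]) @ Vt[:rank]      # best rank-r approximation
    return np.where(mask, M_obs, X)

rng = np.random.default_rng(3)
n1, n2, r = 30, 30, 2
M = rng.standard_normal((n1, r)) @ rng.standard_normal((r, n2))   # rank-2 matrix
mask = rng.random((n1, n2)) < 0.5                                  # observe ~50% of entries
M_obs = np.where(mask, M, 0.0)

M_hat = hard_impute(M_obs, mask, rank=r)
print("relative completion error:", np.linalg.norm(M_hat - M) / np.linalg.norm(M))
```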


Compressive Sensing Based Candidate Detector and its Applications to Spectrum Sensing and Through-the-Wall Radar Imaging

Signal acquisition is a central topic in signal processing. The well-known Shannon-Nyquist theorem lies at the heart of conventional analog-to-digital converters: to recover a signal perfectly, it must be sampled at a constant rate of at least twice the highest frequency present in the signal. However, the Shannon-Nyquist theorem provides a worst-case rate bound for any bandlimited data. In this context, Compressive Sensing (CS) is a new framework in which data acquisition and data processing are merged. CS allows the data to be compressed while it is sampled, by exploiting the sparsity present in many common signals. In so doing, it provides an efficient way to reduce the number of measurements needed for perfect recovery of the signal. CS has exploded in recent years with thousands of technical publications and applications ...

Lagunas, Eva — Universitat Politecnica de Catalunya
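
As a toy version of CS-flavoured spectrum sensing, the sketch below (hypothetical signal, sampling pattern, and threshold, not the thesis's detector) correlates a small number of non-uniform, sub-Nyquist samples with candidate frequency atoms and flags the bins whose scores stand out.

```python
import numpy as np

rng = np.random.default_rng(4)
N, M = 256, 80                                   # N-point Nyquist grid, M < N random samples
t = np.sort(rng.choice(N, M, replace=False))     # sub-Nyquist, non-uniform sampling instants
occupied = [17, 45, 90]                          # hypothetical occupied frequency bins
y = sum(np.cos(2 * np.pi * f * t / N) for f in occupied) + 0.05 * rng.standard_normal(M)

# Correlate the compressed samples with each candidate frequency atom and threshold.
F = np.cos(2 * np.pi * np.outer(t, np.arange(N // 2 + 1)) / N)
F /= np.linalg.norm(F, axis=0)
scores = np.abs(F.T @ y)
detected = np.flatnonzero(scores > 0.5 * scores.max())
print("candidate occupied bins:", detected)      # should be (close to) 17, 45 and 90
```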


Robust Methods for Sensing and Reconstructing Sparse Signals

Compressed sensing (CS) is a recently introduced signal acquisition framework that goes against the traditional Nyquist sampling paradigm. CS demonstrates that a sparse, or compressible, signal can be acquired using a low-rate acquisition process. Since noise is always present in practical data acquisition systems, sensing and reconstruction methods are typically developed assuming a Gaussian (light-tailed) model for the corrupting noise. However, when the underlying signal and/or the measurements are corrupted by impulsive noise, commonly employed linear sampling operators, coupled with Gaussian-derived reconstruction algorithms, fail to recover a close approximation of the signal. This dissertation develops robust sampling and reconstruction methods for sparse signals in the presence of impulsive noise. To achieve this objective, we make use of robust statistics theory to develop appropriate methods addressing the problem of impulsive noise in CS systems. We develop a generalized Cauchy distribution (GCD) ...

Carrillo, Rafael — University of Delaware
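
In the spirit of the robust-statistics approach described above, the sketch below (a simplified illustration, not the dissertation's GCD-based algorithms) combines an l1-regularized ISTA-style iteration with a Cauchy/Lorentzian data fit, so that impulsive measurement errors are strongly down-weighted; all parameters are illustrative.

```python
import numpy as np

def robust_ista(A, y, lam=0.05, gamma=1.0, n_iter=500):
    """ISTA-style recovery with a Cauchy/Lorentzian data fit: each residual r_i is
    weighted by 1 / (gamma**2 + r_i**2), so gross outliers have little pull."""
    step = gamma ** 2 / np.linalg.norm(A, 2) ** 2          # safe step for the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        r = A @ x - y
        grad = A.T @ (r / (gamma ** 2 + r ** 2))           # grad of sum 0.5*log(gamma^2 + r^2)
        z = x - step * grad
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft-threshold (l1 prox)
    return x

rng = np.random.default_rng(5)
m, n, k = 60, 128, 6
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.uniform(1.0, 2.0, k)
y = A @ x_true + 0.01 * rng.standard_normal(m)
y[rng.choice(m, 5, replace=False)] += 20.0                 # a few gross, impulsive outliers

x_hat = robust_ista(A, y)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```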


Compressed sensing and dimensionality reduction for unsupervised learning

This work aims at exploiting compressive sensing paradigms in order to reduce the cost of statistical learning tasks. We first review the fundamentals of compressive sensing and describe some statistical analysis tasks that rely on similar ideas. We then describe a framework for performing parameter estimation on probabilistic mixture models in a setting where the training data is compressed to a fixed-size representation called a sketch. We formulate the estimation as a generalized inverse problem for which we propose a greedy algorithm. We evaluate this framework and algorithm on an isotropic Gaussian mixture model. This proof of concept suggests the existence of theoretical recovery guarantees for sparse objects beyond the usual vector and matrix cases. We therefore study the generalization of stability results for linear inverse problems to general signal models encompassing the standard cases as well as sparse mixtures of probability distributions. We ...

Bourrier, Anthony — INRIA, Technicolor
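
The sketch-based compression described above can be illustrated in a few lines (a toy construction under assumed parameters, not the thesis's exact pipeline): a hypothetical two-component Gaussian mixture dataset is collapsed into a fixed-size vector of averaged random Fourier features, the kind of fixed-size summary the estimation framework then works from.

```python
import numpy as np

rng = np.random.default_rng(6)
d, N, m = 2, 10_000, 64                          # data dimension, #samples, sketch size

# Hypothetical dataset: a 2-component isotropic Gaussian mixture.
X = np.vstack([rng.normal([-2.0, 0.0], 1.0, (N // 2, d)),
               rng.normal([+3.0, 1.0], 1.0, (N // 2, d))])

# Sketch = empirical average of random Fourier features exp(i w_j^T x).
W = rng.standard_normal((m, d))                  # random frequencies
sketch = np.exp(1j * X @ W.T).mean(axis=0)       # fixed-size summary: m complex numbers
print("dataset shape:", X.shape, "-> sketch shape:", sketch.shape)
```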


Sketching for Large-Scale Learning of Mixture Models

Learning parameters from voluminous data can be prohibitive in terms of memory and computational requirements. Furthermore, new challenges arise from modern database architectures, such as the requirement that learning methods be amenable to streaming, parallel and distributed computing. In this context, an increasingly popular approach is to first compress the database into a representation called a linear sketch, which satisfies all of these requirements, and then learn the desired information using only this sketch, which can be significantly faster than using the full data if the sketch is small. In this thesis, we introduce a generic methodology to fit a mixture of probability distributions on the data, using only a sketch of the database. The sketch is defined by combining two notions from the reproducing kernel literature, namely kernel mean embeddings and Random Features expansions. It is seen to correspond ...

Keriven, Nicolas — IRISA, Rennes, France
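
To make the kernel-mean-embedding/random-features construction concrete, the sketch below (toy data and parameters, not the thesis's fitting algorithm) compares the empirical sketch of a dataset with the closed-form sketch of an isotropic Gaussian mixture; a compressive learning procedure would minimize this kind of mismatch over the mixture parameters.

```python
import numpy as np

rng = np.random.default_rng(7)
d, N, m = 2, 20_000, 80
means_true = np.array([[-2.0, 0.0], [3.0, 1.0]])
X = np.vstack([rng.normal(mu, 1.0, (N // 2, d)) for mu in means_true])

W = rng.standard_normal((m, d))                          # random frequencies
z_data = np.exp(1j * X @ W.T).mean(axis=0)               # empirical sketch of the database

def gmm_sketch(means, weights, sigma2, W):
    """Closed-form sketch of an isotropic GMM: sum_k pi_k * exp(i W mu_k - 0.5*sigma2*||w||^2)."""
    phase = np.exp(1j * W @ means.T)                     # (m, K)
    damp = np.exp(-0.5 * sigma2 * np.sum(W ** 2, axis=1, keepdims=True))
    return (phase * damp) @ weights

z_model = gmm_sketch(means_true, np.array([0.5, 0.5]), 1.0, W)
print("sketch mismatch ||z_data - z_model|| =", np.linalg.norm(z_data - z_model))
```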


Exploiting Sparsity for Efficient Compression and Analysis of ECG and Fetal-ECG Signals

Over the last decade there has been increasing interest in solutions for the continuous monitoring of health status with wireless, and in particular wearable, devices that provide remote analysis of physiological data. The use of wireless technologies has introduced new problems, such as the transmission of huge amounts of data within the constraints of devices with limited battery life. An accurate and energy-efficient telemonitoring system can be designed by reducing the amount of data that must be transmitted, which is still a challenging task on devices with both computational and energy constraints. Furthermore, it is not sufficient merely to collect and transmit data; algorithms that provide real-time analysis are also needed. In this thesis, we address the problems of compression and analysis of physiological data using the emerging frameworks of Compressive Sensing (CS) and sparse ...

Da Poian, Giulia — University of Udine


Compressed sensing approaches to large-scale tensor decompositions

Today’s society is characterized by an abundance of data that is generated at an unprecedented velocity. However, much of this data is immediately thrown away by compression or information extraction. In a compressed sensing (CS) setting the inherent sparsity in many datasets is exploited by avoiding the acquisition of superfluous data in the first place. We combine this technique with tensors, or multiway arrays of numerical values, which are higher-order generalizations of vectors and matrices. As the number of entries scales exponentially in the order, tensor problems are often large-scale. We show that the combination of simple, low-rank tensor decompositions with CS effectively alleviates or even breaks the so-called curse of dimensionality. After discussing the larger data fusion optimization framework for coupled and constrained tensor decompositions, we investigate three categories of CS type algorithms to deal with large-scale problems. First, ...

Vervliet, Nico — KU Leuven
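
As a minimal illustration of why low-rank tensor decompositions help against the curse of dimensionality, the sketch below (arbitrary sizes and rank, not an example from the thesis) builds a third-order tensor from CP factor matrices and compares the number of values stored in the factors with the number of entries in the full tensor.

```python
import numpy as np

rng = np.random.default_rng(8)
dims, R = (50, 60, 70), 3                        # third-order tensor, CP rank 3

# Factor matrices of a (hypothetical) rank-3 CP decomposition.
A, B, C = (rng.standard_normal((I, R)) for I in dims)

# Full tensor: T[i, j, k] = sum_r A[i, r] * B[j, r] * C[k, r]
T = np.einsum('ir,jr,kr->ijk', A, B, C)

full_entries = T.size
cp_entries = sum(I * R for I in dims)
print(f"full tensor: {full_entries} entries, CP factors: {cp_entries} entries "
      f"({full_entries / cp_entries:.0f}x compression)")
```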


Sparse Signal Recovery From Incomplete And Perturbed Data

Sparse signal recovery comprises algorithms that can accurately recover undersampled high-dimensional signals. These algorithms require fewer measurements than the traditional Shannon/Nyquist sampling theorem demands. Sparse signal recovery has found many applications, including magnetic resonance imaging, electromagnetic inverse scattering, radar/sonar imaging, seismic data collection, sensor array processing and channel estimation. The focus of this thesis is on the electromagnetic inverse scattering problem and on joint estimation of the frequency offset and the channel impulse response in OFDM. In the electromagnetic inverse scattering problem, the aim is to find the electromagnetic properties of unknown targets from the measured scattered field. The reconstruction of closely spaced point-like objects is investigated. The application of the greedy-pursuit-based sparse recovery methods OMP and FTB-OMP is proposed for increasing the reconstruction resolution. The performance of the proposed methods is compared against the NESTA and MT-BCS methods. ...

Senyuva, Rifat Volkan — Bogazici University
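
Since the abstract proposes greedy-pursuit methods such as OMP, here is a minimal generic OMP sketch on synthetic data (not the FTB-OMP variant or the scattering setup from the thesis):

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily select k atoms and re-fit by least squares."""
    support, residual = [], y.copy()
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))    # atom most correlated with residual
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(9)
m, n, k = 50, 200, 4
A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, axis=0)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.uniform(1.0, 2.0, k)
y = A @ x_true + 0.01 * rng.standard_normal(m)

x_hat = omp(A, y, k)
print("recovered support:", np.flatnonzero(x_hat))
print("true support:     ", np.flatnonzero(x_true))
```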


Sparse Sensing for Statistical Inference: Theory, Algorithms, and Applications

In today's society, we are flooded with massive volumes of data, on the order of a billion gigabytes a day, from pervasive sensors. It is becoming increasingly challenging to locally store the acquired data and to transport it to a central location for signal/data processing (i.e., for inference). To alleviate these problems, there is an urgent need to significantly reduce the sensing cost (i.e., the number of expensive sensors) as well as the related memory and bandwidth requirements, by developing unconventional sensing mechanisms that extract as much information as possible while collecting less data. The first aim of this thesis is to develop theory and algorithms for data reduction. We develop a data reduction tool called sparse sensing, which consists of a deterministic and structured sensing function (guided by a sparse vector) that is optimally designed ...

Chepuri, Sundeep Prabhakar — Delft University of Technology
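
As a simplified stand-in for the sparse-sensing design problem described above, the sketch below greedily selects a few candidate sensors so as to maximize the log-determinant of the resulting Fisher information matrix in a linear Gaussian model; the thesis instead designs a sparse selection vector through structured optimization, so this is only an illustrative heuristic with made-up dimensions.

```python
import numpy as np

def greedy_sensor_selection(H, K, eps=1e-6):
    """Greedily pick K rows of H (candidate sensors) maximizing
    log det of the Fisher information sum_i h_i h_i^T (linear Gaussian model)."""
    n, d = H.shape
    selected, F = [], eps * np.eye(d)
    for _ in range(K):
        gains = []
        for i in range(n):
            if i in selected:
                gains.append(-np.inf)
                continue
            _, logdet = np.linalg.slogdet(F + np.outer(H[i], H[i]))
            gains.append(logdet)
        best = int(np.argmax(gains))
        selected.append(best)
        F = F + np.outer(H[best], H[best])
    return selected

rng = np.random.default_rng(10)
H = rng.standard_normal((100, 5))     # 100 candidate sensors, 5 unknown parameters
print("chosen sensors:", greedy_sensor_selection(H, K=8))
```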


Distributed Stochastic Optimization in Non-Differentiable and Non-Convex Environments

The first part of this dissertation considers distributed learning problems over networked agents. The general objective of distributed adaptation and learning is the solution of global, stochastic optimization problems through localized interactions and without information about the statistical properties of the data. Regularization is a useful technique to encourage or enforce structural properties on the resulting solution, such as sparsity or constraints. A substantial number of regularizers are inherently non-smooth, while many cost functions are differentiable. We propose distributed and adaptive strategies that are able to minimize aggregate sums of objectives. In doing so, we exploit the structure of the individual objectives as sums of differentiable costs and non-differentiable regularizers. The resulting algorithms are adaptive in nature and able to continuously track drifts in the problem; their recursions, however, are subject to persistent perturbations arising from the stochastic nature of ...

Vlaski, Stefan — University of California, Los Angeles
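
To illustrate the kind of strategy described above, the sketch below (a toy setup with assumed parameters, not the dissertation's exact recursions) runs a diffusion-style adapt-then-combine iteration in which each agent takes a stochastic proximal-gradient step on its local least-squares cost with an l1 regularizer, and then averages with its neighbours over a ring network.

```python
import numpy as np

def soft(z, t):
    """Soft-thresholding: proximal operator of the l1 norm."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

rng = np.random.default_rng(11)
n_agents, d = 10, 30
w_true = np.zeros(d)
w_true[rng.choice(d, 4, replace=False)] = [1.0, -0.8, 0.6, 1.2]

# Ring network: each agent averages with itself and its two neighbours.
C = np.zeros((n_agents, n_agents))
for k in range(n_agents):
    C[k, [k, (k - 1) % n_agents, (k + 1) % n_agents]] = 1.0 / 3.0

mu, lam = 0.05, 0.002
W = np.zeros((n_agents, d))                       # one estimate per agent
for _ in range(3000):                             # streaming data: one sample per agent per step
    X = rng.standard_normal((n_agents, d))
    y = X @ w_true + 0.1 * rng.standard_normal(n_agents)
    # Adapt: stochastic proximal-gradient step on each agent's local cost + l1 regularizer.
    psi = soft(W + mu * (y - np.sum(X * W, axis=1))[:, None] * X, mu * lam)
    # Combine: diffuse the intermediate estimates over the network.
    W = C @ psi

print("max estimation error across agents:", np.abs(W - w_true).max())
```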
