## Compressed sensing approaches to large-scale tensor decompositions (2018)

Explicit and implicit tensor decomposition-based algorithms and applications

Various real-life data, such as time series and multi-sensor recordings, can be represented by vectors and matrices, which are one-way and two-way arrays of numerical values, respectively. Valuable information can be extracted from these measured data matrices by means of matrix factorizations in a broad range of applications within signal processing, data mining, and machine learning. While matrix-based methods are powerful and well-known tools, they are limited to single-mode variations, making them ill-suited to tackle multi-way data without loss of information. Higher-order tensors are a natural extension of vectors (first order) and matrices (second order), enabling us to represent multi-way arrays of numerical values, which have become ubiquitous in signal processing and data mining applications. By leveraging the powerful utilities offered by tensor decompositions, such as compression and uniqueness properties, we can extract more information from multi-way ...

Boussé, Martijn — KU Leuven
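As a toy illustration of the multi-way representation and compression the abstract refers to, here is a minimal sketch (the vectors and sizes are invented for illustration) of a rank-1 term of a canonical polyadic decomposition, built as the outer product of three vectors:

```python
import numpy as np

# Hypothetical example: a third-order rank-1 tensor as the outer product
# of three vectors, the basic building block of a canonical polyadic
# decomposition (CPD). All values below are made up for illustration.
a = np.array([1.0, 2.0])
b = np.array([1.0, -1.0, 0.5])
c = np.array([2.0, 3.0])

T = np.einsum('i,j,k->ijk', a, b, c)   # T[i, j, k] = a[i] * b[j] * c[k]

# The full tensor holds 2*3*2 = 12 entries, but the rank-1 term is fully
# described by 2 + 3 + 2 = 7 parameters -- the kind of compression a
# low-rank tensor decomposition exploits.
print(T.shape)  # (2, 3, 2)
```

A rank-R CPD simply sums R such terms, so the parameter count grows linearly with each mode size rather than multiplicatively.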

Advanced Algebraic Concepts for Efficient Multi-Channel Signal Processing

Modern society is undergoing a fundamental change in the way we interact with technology. More and more devices are becoming "smart" by gaining advanced computation capabilities and communication interfaces, ranging from household appliances and transportation systems to large-scale networks like the power grid. Recording, processing, and exchanging digital information is thus becoming increasingly important. As a growing share of devices is nowadays mobile and hence battery-powered, a particular interest in efficient digital signal processing techniques emerges. This thesis contributes to this goal by demonstrating methods for finding efficient algebraic solutions to various applications of multi-channel digital signal processing. These may not always result in the best possible system performance. However, they often come close while being significantly simpler to describe and to implement. The simpler description facilitates a thorough analysis of their performance, which is crucial to design robust and reliable ...

Roemer, Florian — Ilmenau University of Technology

Bayesian Compressed Sensing using Alpha-Stable Distributions

During the last decades, information has been gathered and processed at an explosive rate. This fact gives rise to a very important issue: how to effectively and precisely describe the information content of a given source signal, or an ensemble of source signals, such that it can be stored, processed, or transmitted while taking into consideration the limitations and capabilities of the various digital devices involved. One of the fundamental principles of signal processing over the last decades is the Nyquist-Shannon sampling theorem, which states that the minimum number of samples needed to reconstruct a signal without error is dictated by its bandwidth. However, there are many cases in everyday life in which sampling at the Nyquist rate produces too much data, demanding increased processing power as well as storage. A mathematical theory that emerged ...

Tzagkarakis, George — University of Crete

Robust Methods for Sensing and Reconstructing Sparse Signals

Compressed sensing (CS) is a recently introduced signal acquisition framework that goes against the traditional Nyquist sampling paradigm. CS demonstrates that a sparse, or compressible, signal can be acquired using a low-rate acquisition process. Since noise is always present in practical data acquisition systems, sensing and reconstruction methods are typically developed assuming a Gaussian (light-tailed) model for the corrupting noise. However, when the underlying signal and/or the measurements are corrupted by impulsive noise, commonly employed linear sampling operators, coupled with Gaussian-derived reconstruction algorithms, fail to recover a close approximation of the signal. This dissertation develops robust sampling and reconstruction methods for sparse signals in the presence of impulsive noise. To achieve this objective, we make use of robust statistics theory to develop appropriate methods addressing the problem of impulsive noise in CS systems. We develop a generalized Cauchy distribution (GCD) ...

Carrillo, Rafael — University of Delaware

Sketching for Large-Scale Learning of Mixture Models

Learning parameters from voluminous data can be prohibitive in terms of memory and computational requirements. Furthermore, new challenges arise from modern database architectures, such as the requirement for learning methods to be amenable to streaming, parallel, and distributed computing. In this context, an increasingly popular approach is to first compress the database into a representation called a linear sketch, which satisfies all of these requirements, and then to learn the desired information using only this sketch, which can be significantly faster than using the full data if the sketch is small. In this thesis, we introduce a generic methodology to fit a mixture of probability distributions on the data, using only a sketch of the database. The sketch is defined by combining two notions from the reproducing kernel literature, namely kernel mean embedding and Random Features expansions. It is seen to correspond ...

Keriven, Nicolas — IRISA, Rennes, France
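To make the sketching idea concrete, here is a minimal sketch (all sizes, the Gaussian frequency draw, and the datasets are assumptions for illustration) of a linear sketch built from averaged random Fourier features, i.e. an empirical kernel mean embedding:

```python
import numpy as np

# Minimal illustration of compressing a dataset into a linear sketch via
# averaged random Fourier features (an empirical kernel mean embedding).
# All sizes and distributions below are assumptions for illustration.
rng = np.random.default_rng(1)
d, m = 2, 64                       # data dimension, sketch size
W = rng.standard_normal((m, d))    # random frequencies (a Gaussian draw
                                   # corresponds to an RBF kernel)

def sketch(X):
    # empirical average of complex exponentials exp(i w^T x): one
    # length-m vector summarizing the whole dataset X
    return np.exp(1j * X @ W.T).mean(axis=0)

X1 = rng.standard_normal((500, d))         # one dataset
X2 = rng.standard_normal((500, d)) + 5.0   # a shifted dataset
s1, s2 = sketch(X1), sketch(X2)
# s1 approximates the characteristic function exp(-||w||^2 / 2) of N(0, I);
# the shifted data X2 yields a clearly different sketch.
```

Learning then proceeds from `s1` alone, without revisiting the 500 original samples; because the sketch is an average, it can be computed in one streaming pass or merged across distributed chunks of the database.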

Subspace-based exponential data fitting using linear and multilinear algebra

The exponentially damped sinusoidal (EDS) model arises in numerous signal processing applications. It is therefore of great interest to have methods able to estimate the parameters of such a model in the single-channel as well as in the multi-channel case. Because such a model naturally lends itself to subspace representation, powerful matrix approaches like HTLS in the single-channel case, HTLSstack in the multi-channel case, and HTLSDstack in the decimative case have been developed to estimate the parameters of the underlying EDS model. They basically consist of stacking the signal in Hankel (single-channel) or block-Hankel (multi-channel) data matrices. Then, the signal subspace is estimated by means of the singular value decomposition (SVD). The parameters of the model, namely the amplitudes, the phases, the damping factors, and the frequencies, are estimated from this subspace. Note that the sample covariance matrix ...

Papy, Jean-Michel — Katholieke Universiteit Leuven
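The Hankel-plus-SVD pipeline described above can be sketched for a single noiseless exponential (a simplified, single-pole stand-in for HTLS; the signal parameters and window length are invented for illustration):

```python
import numpy as np

# Simplified single-channel sketch of subspace-based exponential data
# fitting: stack the signal in a Hankel matrix, estimate the signal
# subspace with the SVD, and read the poles off the subspace's
# shift-invariance. Parameters below are invented for illustration.
def estimate_poles(x, L, r):
    N = len(x)
    H = np.array([[x[i + j] for j in range(N - L + 1)] for i in range(L)])
    U, s, Vh = np.linalg.svd(H)
    Us = U[:, :r]                      # signal subspace (model order r)
    # shift invariance: Us[1:] ~= Us[:-1] @ Z; the eigenvalues of Z are
    # the signal poles z = exp(-damping + 2j*pi*frequency)
    Z, *_ = np.linalg.lstsq(Us[:-1], Us[1:], rcond=None)
    return np.linalg.eigvals(Z)

damping, freq = 0.02, 0.1
n = np.arange(64)
x = np.exp((-damping + 2j * np.pi * freq) * n)   # one damped exponential

pole = estimate_poles(x, L=20, r=1)[0]
freq_hat = np.angle(pole) / (2 * np.pi)    # recovers ~0.1
damp_hat = -np.log(np.abs(pole))           # recovers ~0.02
```

Amplitudes and phases would then follow from a linear least-squares fit of the recovered poles to the data; the multi-channel variants differ mainly in how the (block-)Hankel matrix is assembled.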

Exploiting Sparsity for Efficient Compression and Analysis of ECG and Fetal-ECG Signals

Over the last decade there has been an increasing interest in solutions for the continuous monitoring of health status with wireless, and in particular wearable, devices that provide remote analysis of physiological data. The use of wireless technologies has introduced new problems, such as transmitting a huge amount of data within the limited battery life of these devices. An accurate and energy-efficient telemonitoring system can be designed by reducing the amount of data that must be transmitted, which is still a challenging task on devices with both computational and energy constraints. Furthermore, it is not sufficient merely to collect and transmit data; algorithms that provide real-time analysis are also needed. In this thesis, we address the problems of compression and analysis of physiological data using the emerging frameworks of Compressive Sensing (CS) and sparse ...

Da Poian, Giulia — University of Udine

Parameter Estimation and Filtering Using Sparse Modeling

Sparsity-based estimation techniques deal with the problem of retrieving a data vector from an undercomplete set of linear observations, when the data vector is known to have few nonzero elements with unknown positions. It is also known as the atomic decomposition problem, and has been carefully studied in the field of compressed sensing. Recent findings have led to a method called basis pursuit, also known as Least Absolute Shrinkage and Selection Operator (LASSO), as a numerically reliable sparsity-based approach. Although the atomic decomposition problem is generally NP-hard, it has been shown that basis pursuit may provide exact solutions under certain assumptions. This has led to an extensive study of signals with sparse representation in different domains, providing a new general insight into signal processing. This thesis further investigates the role of sparsity-based techniques, especially basis pursuit, for solving parameter estimation ...

Panahi, Ashkan — Chalmers University of Technology
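As an illustration of basis pursuit in its LASSO form, here is a minimal iterative soft-thresholding (ISTA) solver on a synthetic 3-sparse problem (the sizes, the Gaussian sensing matrix, and the regularization weight are assumptions, and ISTA is just one simple solver for this objective):

```python
import numpy as np

# Minimal ISTA solver for the LASSO objective
#     min_x 0.5 * ||A x - y||_2^2 + lam * ||x||_1,
# a standard numerical route to basis pursuit. Problem sizes and the
# regularization weight below are assumptions for illustration.
def ista(A, y, lam, iters):
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - step * (A.T @ (A @ x - y))   # gradient step on the LS term
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # soft-threshold
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)   # 40 measurements, dim 100
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [1.0, -0.7, 0.5]             # 3-sparse ground truth
y = A @ x_true                                     # noiseless observations

x_hat = ista(A, y, lam=0.01, iters=2000)
# the three largest entries of |x_hat| sit on the true support {5, 37, 80}
```

Although the support-recovery problem is NP-hard in general, this convex surrogate recovers the 3-sparse vector from only 40 of its 100 "coordinates", which is the phenomenon the abstract alludes to.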

Signal acquisition is a main topic in signal processing. The well-known Shannon-Nyquist theorem lies at the heart of conventional analog-to-digital converters, stating that a signal must be sampled at a constant rate of at least twice the highest frequency present in the signal in order to be perfectly recovered. However, the Shannon-Nyquist theorem provides a worst-case rate bound for any bandlimited data. In this context, Compressive Sensing (CS) is a new framework in which data acquisition and data processing are merged. CS allows the data to be compressed while it is sampled by exploiting the sparsity present in many common signals. In doing so, it provides an efficient way to reduce the number of measurements needed for perfect recovery of the signal. CS has exploded in recent years with thousands of technical publications and applications ...

Lagunas, Eva — Universitat Politecnica de Catalunya
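The Nyquist-rate bound the abstract starts from is a one-line calculation; the 4 kHz figure below is an assumed example value, not taken from the thesis:

```python
# Worked Nyquist-rate example: a signal whose highest frequency component
# is f_max must be sampled at a constant rate f_s >= 2 * f_max to be
# perfectly recoverable. The 4 kHz bandwidth is an assumed example.
f_max = 4_000.0                 # Hz
f_nyquist = 2 * f_max           # minimum sampling rate: 8 kHz
duration = 1.0                  # seconds of signal
n_samples = int(f_nyquist * duration)   # 8000 samples -- the worst-case
                                        # budget CS tries to undercut for
                                        # sparse signals
print(f_nyquist, n_samples)     # 8000.0 8000
```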

Advanced Signal Processing Concepts for Multi-Dimensional Communication Systems

The widespread use of mobile internet and smart applications has led to an explosive growth in mobile data traffic. With the rise of smart homes, smart buildings, and smart cities, this demand is ever growing, since future communication systems will require the integration of multiple networks serving diverse sectors, domains, and applications, such as multimedia, virtual or augmented reality, machine-to-machine (M2M) communication and the Internet of Things (IoT), automotive applications, and many more. Therefore, future communication systems will be required not only to provide Gbps wireless connectivity but also to fulfill other requirements such as low latency and massive machine-type connectivity while ensuring quality of service. Without significant technological advances to increase the system capacity, the existing telecommunications infrastructure will be unable to support these multi-dimensional requirements. This poses an important demand for suitable waveforms with ...

Cheema, Sher Ali — Technische Universität Ilmenau

MIMO Radars with Sparse Sensing

Multi-input multi-output (MIMO) radars achieve high direction-of-arrival resolution by transmitting orthogonal waveforms, performing matched filtering at the receiver end, and then jointly processing the measurements of all receive antennas. This dissertation studies the use of compressive sensing (CS) and matrix completion (MC) techniques as means of reducing the amount of data that needs to be collected by a MIMO radar system, without sacrificing the system's good resolution properties. MIMO radars with sparse sensing are useful in networked radar scenarios, in which the joint processing of the measurements is done at a fusion center, which might be connected to the receive antennas via a wireless link. In such scenarios, a reduced amount of data translates into bandwidth and power savings in the receiver-fusion center link. First, we consider previously defined CS-based MIMO radar schemes, and propose optimal transmit antenna ...

Sun, Shunqiao — Rutgers, The State University of New Jersey

MIMO instantaneous blind identification and separation based on arbitrary order

This thesis is concerned with three closely related problems. The first one is called Multiple-Input Multiple-Output (MIMO) Instantaneous Blind Identification, which we denote by MIBI. In this problem a number of mutually statistically independent source signals are mixed by a MIMO instantaneous mixing system and only the mixed signals are observed, i.e. both the mixing system and the original sources are unknown or 'blind'. The goal of MIBI is to identify the MIMO system from the observed mixtures of the source signals only. The second problem is called Instantaneous Blind Signal Separation (IBSS) and deals with recovering mutually statistically independent source signals from their observed instantaneous mixtures only. The observation model and assumptions on the signals and mixing system are the same as those of MIBI. However, the main purpose of IBSS is the estimation of the source signals, whereas ...

van de Laar, Jakob — T.U. Eindhoven

Cost functions for acoustic filter estimation in reverberant mixtures

This work is focused on the processing of multichannel and multisource audio signals. From an audio mixture of several audio sources recorded in a reverberant room, we wish to estimate the acoustic responses (a.k.a. mixing filters) between the sources and the microphones. To solve this inverse problem, one needs to take into account additional hypotheses on the nature of the acoustic responses. Our approach consists in first identifying mathematically the necessary hypotheses on the acoustic responses for their estimation, and then building cost functions and algorithms to effectively estimate them. First, we considered the case where the source signals are known. We developed a method to estimate the acoustic responses based on a convex regularization which exploits both the temporal sparsity of the filters and their exponentially decaying envelope. Real-world experiments confirmed the effectiveness of this method ...

Benichoux, Alexis — Université Rennes I

Wireless Sensor Networks (WSNs) aim for accurate data gathering and representation of one or multiple physical variables from the environment, by means of sensor readings and wireless data-packet transmission to a Data Fusion Center (DFC). There is no comprehensive common set of requirements for all WSNs, as they are application dependent. Moreover, due to specific node capabilities or energy-consumption constraints, several tradeoffs have to be considered during the design; in particular, the price of the sensor nodes is a determining factor. The distinction between small- and large-scale WSNs does not only refer to the quantity of sensor nodes, but also establishes the main design challenges in each case. For example, node organization is a key issue in large-scale WSNs, where many inexpensive nodes have to work properly in a coordinated manner. Regarding the amount of ...

Chidean, Mihaela I. — Rey Juan Carlos University
