Compressed sensing approaches to large-scale tensor decompositions (2018)
Explicit and implicit tensor decomposition-based algorithms and applications
Various real-life data such as time series and multi-sensor recordings can be represented by vectors and matrices, which are one-way and two-way arrays of numerical values, respectively. Valuable information can be extracted from these measured data matrices by means of matrix factorizations in a broad range of applications within signal processing, data mining, and machine learning. While matrix-based methods are powerful and well-known tools for various applications, they are limited to single-mode variations, making them ill-suited to tackle multi-way data without loss of information. Higher-order tensors are a natural extension of vectors (first order) and matrices (second order), enabling us to represent multi-way arrays of numerical values, which have become ubiquitous in signal processing and data mining applications. By leveraging the powerful utilities offered by tensor decompositions, such as compression and uniqueness properties, we can extract more information from multi-way ...
Boussé, Martijn — KU Leuven
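
To make the tensor viewpoint concrete, here is a minimal, illustrative numpy sketch (not the algorithms of the thesis) that builds a third-order tensor as a sum of rank-1 terms and recovers the factors with a bare-bones alternating least squares (ALS) loop for the canonical polyadic decomposition; all dimensions and the rank are arbitrary toy choices, and no convergence safeguards are included.

# Toy CPD sketch: build a rank-R third-order tensor from rank-1 terms and
# recover its factors with basic ALS on the three unfoldings.
import numpy as np

rng = np.random.default_rng(0)
I, J, K, R = 6, 5, 4, 2                      # toy tensor dimensions and rank

A = rng.standard_normal((I, R))
B = rng.standard_normal((J, R))
C = rng.standard_normal((K, R))
T = np.einsum('ir,jr,kr->ijk', A, B, C)      # sum of R rank-1 terms

def khatri_rao(X, Y):
    # column-wise Kronecker product of X (m x R) and Y (n x R) -> (mn x R)
    return np.einsum('ir,jr->ijr', X, Y).reshape(-1, X.shape[1])

# ALS: update one factor at a time from the matching unfolding of T
Ah, Bh, Ch = (rng.standard_normal((n, R)) for n in (I, J, K))
for _ in range(200):
    Ah = np.reshape(T, (I, -1)) @ np.linalg.pinv(khatri_rao(Bh, Ch).T)
    Bh = np.reshape(np.moveaxis(T, 1, 0), (J, -1)) @ np.linalg.pinv(khatri_rao(Ah, Ch).T)
    Ch = np.reshape(np.moveaxis(T, 2, 0), (K, -1)) @ np.linalg.pinv(khatri_rao(Ah, Bh).T)

That = np.einsum('ir,jr,kr->ijk', Ah, Bh, Ch)
print('relative fit error:', np.linalg.norm(T - That) / np.linalg.norm(T))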
Advanced Algebraic Concepts for Efficient Multi-Channel Signal Processing
Modern society is undergoing a fundamental change in the way we interact with technology. More and more devices are becoming "smart" by gaining advanced computation capabilities and communication interfaces, from household appliances and transportation systems to large-scale networks like the power grid. Recording, processing, and exchanging digital information is thus becoming increasingly important. Since a growing share of devices is nowadays mobile and hence battery-powered, efficient digital signal processing techniques are of particular interest. This thesis contributes to this goal by demonstrating methods for finding efficient algebraic solutions to various applications of multi-channel digital signal processing. These may not always result in the best possible system performance. However, they often come close while being significantly simpler to describe and to implement. The simpler description facilitates a thorough analysis of their performance, which is crucial for designing robust and reliable ...
Roemer, Florian — Ilmenau University of Technology
Bayesian Compressed Sensing using Alpha-Stable Distributions
During the last decades, information has been gathered and processed at an explosive rate. This fact gives rise to a very important issue: how to effectively and precisely describe the information content of a given source signal or an ensemble of source signals, such that it can be stored, processed, or transmitted while taking into consideration the limitations and capabilities of the various digital devices involved. For decades, one of the fundamental principles of signal processing has been the Nyquist-Shannon sampling theorem, which states that the minimum number of samples needed to reconstruct a signal without error is dictated by its bandwidth. However, there are many cases in our everyday life in which sampling at the Nyquist rate produces too much data, demanding increased processing power as well as storage. A mathematical theory that emerged ...
Tzagkarakis, George — University of Crete
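
As a reminder of the sampling theorem this abstract builds on, the following toy Python snippet samples a bandlimited signal above its Nyquist rate and rebuilds it by Whittaker-Shannon (sinc) interpolation; the signal, rates, and grids are illustrative choices, and the small residual error comes from truncating the interpolation series to finitely many samples.

# Nyquist-Shannon sketch: a signal bandlimited to B Hz, sampled at fs >= 2B,
# is rebuilt by sinc interpolation: x(t) = sum_n x(nT) sinc((t - nT)/T).
import numpy as np

B = 40.0                                   # highest frequency in the signal (Hz)
fs = 2.5 * B                               # sampling rate, above the Nyquist rate 2B
T = 1.0 / fs
n = np.arange(-100, 101)                   # finite window of sample indices
x = lambda t: np.cos(2*np.pi*25*t) + 0.5*np.sin(2*np.pi*B*t)

t_dense = np.linspace(-0.2, 0.2, 2001)     # evaluation grid well inside the window
x_rec = np.sum(x(n*T)[:, None] * np.sinc((t_dense[None, :] - n[:, None]*T) / T),
               axis=0)
# small but nonzero: the infinite interpolation series is truncated
print('max reconstruction error:', np.max(np.abs(x_rec - x(t_dense))))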
Tensor Decompositions and Algorithms for Efficient Multidimensional Signal Processing
Due to the extensive growth of big data applications, the widespread use of multisensor technologies, and the need for efficient data representations, multidimensional techniques are a primary tool for many signal processing applications. Multidimensional arrays or tensors allow a natural representation of high-dimensional data. Therefore, they are particularly suited for tasks involving multi-modal data sources such as biomedical sensor readings or multiple-input multiple-output (MIMO) antenna arrays. While tensor-based techniques were still in their infancy several decades ago, nowadays, they have already proven their effectiveness in various applications. There are many different tensor decompositions in the literature, and each finds use in diverse signal processing fields. In this thesis, we focus on two tensor factorization models: the rank-(Lr,Lr,1) Block-Term Decomposition (BTD) and the Multilinear Generalized Singular Value Decomposition (ML-GSVD), which we propose in this thesis. The ML-GSVD is an extension ...
Khamidullina, Liana — Technische Universität Ilmenau
Robust Methods for Sensing and Reconstructing Sparse Signals
Compressed sensing (CS) is a recently introduced signal acquisition framework that goes against the traditional Nyquist sampling paradigm. CS demonstrates that a sparse, or compressible, signal can be acquired using a low-rate acquisition process. Since noise is always present in practical data acquisition systems, sensing and reconstruction methods are typically developed assuming a Gaussian (light-tailed) model for the corrupting noise. However, when the underlying signal and/or the measurements are corrupted by impulsive noise, commonly employed linear sampling operators, coupled with Gaussian-derived reconstruction algorithms, fail to recover a close approximation of the signal. This dissertation develops robust sampling and reconstruction methods for sparse signals in the presence of impulsive noise. To achieve this objective, we make use of robust statistics theory to develop appropriate methods addressing the problem of impulsive noise in CS systems. We develop a generalized Cauchy distribution (GCD) ...
Carrillo, Rafael — University of Delaware
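
The following Python sketch illustrates the general idea of robust sparse recovery under impulsive noise, using a Lorentzian (Cauchy-type) data-fit loss minimized by proximal gradient steps with soft-thresholding; it is a toy stand-in, with untuned illustrative parameters, for the GCD-based estimators developed in the dissertation.

# Robust recovery sketch: the Lorentzian loss log(gamma^2 + r^2) saturates for
# large residuals, so a few huge impulses are downweighted instead of dominating.
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 256, 100, 8
Phi = rng.standard_normal((m, n)) / np.sqrt(m)     # random sensing matrix
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
noise = 0.01 * rng.standard_normal(m)
noise[rng.choice(m, 5, replace=False)] += 10.0     # a few large impulses
y = Phi @ x_true + noise

gamma, lam, step = 1.0, 0.05, 0.05                 # illustrative, untuned values
x = np.zeros(n)
for _ in range(1000):
    r = Phi @ x - y
    grad = Phi.T @ (2.0 * r / (gamma**2 + r**2))   # Lorentzian influence function
    z = x - step * grad
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold

print('relative error:', np.linalg.norm(x - x_true) / np.linalg.norm(x_true))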
Functional Neuroimaging Data Characterisation Via Tensor Representations
The growing interest in neuroimaging technologies generates a massive amount of biomedical data that exhibit high dimensionality. Tensor-based analysis of brain imaging data has by now been recognized as an effective approach exploiting its inherent multi-way nature. In particular, the advantages of tensorial over matrix-based methods have previously been demonstrated in the context of functional magnetic resonance imaging (fMRI) source localization: the identification of the regions of the brain that are activated at specific time instances. However, such methods can become ineffective in realistic challenging scenarios involving, e.g., strong noise and/or significant overlap among the activated regions. Moreover, they commonly rely on the assumption of an underlying multilinear model generating the data. In the first part of this thesis, we aimed at investigating the possible gains from exploiting the 3-dimensional nature of the brain images, through a higher-order tensorization ...
Chatzichristos, Christos — National and Kapodistrian University of Athens
Compressed Sensing: Novel Applications, Challenges, and Techniques
Compressed Sensing (CS) is a widely used technique for efficient signal acquisition, in which a very small number of (possibly noisy) linear measurements of an unknown signal vector are taken via multiplication with a designed ‘sensing matrix’ in an application-specific manner. The signal is later recovered by exploiting its sparsity in some known orthonormal basis, together with special properties of the sensing matrix that allow for such recovery. We study three new applications of CS, each of which poses a unique challenge in a different aspect of it, and propose novel techniques to solve them, advancing the field of CS. Each application involves a unique combination of realistic assumptions on the measurement noise model and the signal, and a unique set of algorithmic challenges. We frame Pooled RT-PCR Testing for COVID-19 – wherein RT-PCR (Reverse Transcription Polymerase Chain ...
Ghosh, Sabyasachi — Department of Computer Science and Engineering, Indian Institute of Technology Bombay
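
For readers new to CS, this minimal Python demo acquires a sparse vector with a random Gaussian sensing matrix and recovers it with orthogonal matching pursuit (OMP); it is a generic textbook setup, not the noise models or applications studied in the thesis.

# CS demo: m << n noiseless measurements of a k-sparse vector, greedy recovery.
import numpy as np

rng = np.random.default_rng(2)
n, m, k = 400, 150, 10
A = rng.standard_normal((m, n)) / np.sqrt(m)       # sensing matrix
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x                                          # compressed measurements

# OMP: greedily pick the column most correlated with the residual,
# then re-fit on the selected support by least squares.
support, r = [], y.copy()
for _ in range(k):
    support.append(int(np.argmax(np.abs(A.T @ r))))
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    r = y - A[:, support] @ coef

x_hat = np.zeros(n)
x_hat[support] = coef
print('support recovered:', sorted(support) == sorted(np.flatnonzero(x).tolist()))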
Sketching for Large-Scale Learning of Mixture Models
Learning parameters from voluminous data can be prohibitive in terms of memory and computational requirements. Furthermore, new challenges arise from modern database architectures, such as the requirements for learning methods to be amenable to streaming, parallel, and distributed computing. In this context, an increasingly popular approach is to first compress the database into a representation called a linear sketch, which satisfies all the mentioned requirements, and then to learn the desired information using only this sketch, which can be significantly faster than using the full data if the sketch is small. In this thesis, we introduce a generic methodology to fit a mixture of probability distributions on the data, using only a sketch of the database. The sketch is defined by combining two notions from the reproducing kernel literature, namely kernel mean embedding and Random Features expansions. It is seen to correspond ...
Keriven, Nicolas — IRISA, Rennes, France
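
A hedged illustration of the sketching idea: the Python snippet below compresses a whole dataset into a fixed-size vector of averaged random Fourier features (an empirical kernel mean embedding); the bandwidth and sketch size are arbitrary choices, and the mixture-fitting step of the thesis is not shown. Because the sketch is an average, it can be computed in one streaming pass and merged across distributed chunks.

# Sketching sketch: average of exp(i * Omega @ x) over the dataset; the size of
# the sketch is fixed (m_feat) regardless of how many samples are compressed.
import numpy as np

rng = np.random.default_rng(3)
d, m_feat = 2, 64                                # data dimension, sketch size
sigma = 0.5                                      # illustrative kernel bandwidth
Omega = rng.standard_normal((m_feat, d)) / sigma # frequencies ~ N(0, 1/sigma^2)

def sketch(X):
    # one pass, streamable: mean of random Fourier features over the rows of X
    return np.exp(1j * X @ Omega.T).mean(axis=0)

X1 = rng.normal(loc=[1.0, -2.0], scale=0.3, size=(20_000, d))
X2 = rng.normal(loc=[1.0, -2.0], scale=0.3, size=(500, d))
# sketches of two samples from the same distribution are close, whatever N is
print(np.linalg.norm(sketch(X1) - sketch(X2)))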
Exploiting Sparsity for Efficient Compression and Analysis of ECG and Fetal-ECG Signals
Over the last decade there has been an increasing interest in solutions for the continuous monitoring of health status with wireless, and in particular, wearable devices that provide remote analysis of physiological data. The use of wireless technologies has introduced new problems, such as the transmission of a huge amount of data within the battery-life constraints of such devices. The design of an accurate and energy-efficient telemonitoring system can be achieved by reducing the amount of data that must be transmitted, which is still a challenging task on devices with both computational and energy constraints. Furthermore, it is not sufficient merely to collect and transmit data; algorithms that provide real-time analysis are also needed. In this thesis, we address the problems of compression and analysis of physiological data using the emerging frameworks of Compressive Sensing (CS) and sparse ...
Da Poian, Giulia — University of Udine
Subspace-based exponential data fitting using linear and multilinear algebra
The exponentially damped sinusoidal (EDS) model arises in numerous signal processing applications. It is therefore of great interest to have methods able to estimate the parameters of such a model in the single-channel as well as in the multi-channel case. Because such a model naturally lends itself to subspace representation, powerful matrix approaches like HTLS in the single-channel case, HTLSstack in the multi-channel case, and HTLSDstack in the decimative case have been developed to estimate the parameters of the underlying EDS model. They basically consist of stacking the signal in Hankel (single-channel) or block-Hankel (multi-channel) data matrices. Then, the signal subspace is estimated by means of the singular value decomposition (SVD). The parameters of the model, namely the amplitudes, the phases, the damping factors, and the frequencies, are estimated from this subspace. Note that the sample covariance matrix ...
Papy, Jean-Michel — Katholieke Universiteit Leuven
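
The core subspace idea can be illustrated in a few lines of Python: stack a noiseless two-component EDS signal in a Hankel matrix, estimate the signal subspace by SVD, and exploit its shift invariance to recover the signal poles (dampings and frequencies). Plain least squares stands in here for the total least squares step of HTLS proper, and the toy parameters are arbitrary.

# Single-channel Hankel + SVD + shift invariance, HTLS-style (toy version).
import numpy as np

N, K = 128, 2
n = np.arange(N)
# two damped exponentials with poles z_k = exp(-d_k + 2j*pi*f_k):
# dampings -0.01, -0.02 and frequencies 0.12, 0.30 (normalized)
poles = np.exp(np.array([-0.01 + 2j*np.pi*0.12, -0.02 + 2j*np.pi*0.30]))
x = (poles[None, :] ** n[:, None]).sum(axis=1)

L = N // 2
H = np.array([x[i:i+L] for i in range(N - L + 1)])   # Hankel data matrix
U = np.linalg.svd(H, full_matrices=False)[0][:, :K]  # signal subspace basis
# shift invariance: U[1:] ~ U[:-1] @ Z, and the eigenvalues of Z are the poles
Z = np.linalg.lstsq(U[:-1], U[1:], rcond=None)[0]
z_hat = np.linalg.eigvals(Z)
print('estimated dampings:   ', np.sort(np.log(np.abs(z_hat))))
print('estimated frequencies:', np.sort(np.angle(z_hat) / (2*np.pi)))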
This dissertation develops false discovery rate (FDR) controlling machine learning algorithms for large-scale high-dimensional data. Ensuring the reproducibility of discoveries based on high-dimensional data is pivotal in numerous applications. The developed algorithms perform fast variable selection tasks in large-scale high-dimensional settings where the number of variables may be much larger than the number of samples. This includes large-scale data with up to millions of variables, such as genome-wide association studies (GWAS). Theoretical finite-sample FDR-control guarantees based on martingale theory have been established, proving the trustworthiness of the developed methods. The practical open-source R software packages TRexSelector and tlars, which implement the proposed algorithms, have been published on the Comprehensive R Archive Network (CRAN). Extensive numerical experiments and real-world problems in biomedical and financial engineering demonstrate their performance in challenging use cases. The first three main parts of this dissertation present ...
Machkour, Jasin — Technische Universität Darmstadt
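
The algorithms of this dissertation live in the R packages named above; as a generic, deliberately swapped-in illustration of what FDR control means, here is the classical Benjamini-Hochberg procedure applied to a toy vector of p-values in Python (not the dissertation's variable selection method).

# Benjamini-Hochberg: reject the largest set of ordered p-values p_(1..k)
# satisfying p_(k) <= alpha * k / m, which controls the FDR at level alpha.
import numpy as np

def benjamini_hochberg(pvals, alpha=0.1):
    p = np.asarray(pvals)
    order = np.argsort(p)
    m = len(p)
    thresh = alpha * np.arange(1, m + 1) / m
    below = p[order] <= thresh
    k = below.nonzero()[0].max() + 1 if below.any() else 0
    rejected = np.zeros(m, dtype=bool)
    rejected[order[:k]] = True
    return rejected

rng = np.random.default_rng(6)
p_null = rng.uniform(size=90)                  # 90 true nulls: p ~ Uniform(0,1)
p_alt = rng.uniform(high=0.001, size=10)       # 10 genuine effects: tiny p-values
print(benjamini_hochberg(np.concatenate([p_alt, p_null])).sum(), 'discoveries')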
Signal acquisition is a central topic in signal processing. The well-known Shannon-Nyquist theorem lies at the heart of conventional analog-to-digital converters, stating that a signal must be sampled at a constant rate of at least twice the highest frequency present in the signal in order to be perfectly recovered. However, the Shannon-Nyquist theorem provides a worst-case rate bound for any bandlimited data. In this context, Compressive Sensing (CS) is a new framework in which data acquisition and data processing are merged. CS allows the data to be compressed while it is sampled, by exploiting the sparsity present in many common signals. In so doing, it provides an efficient way to reduce the number of measurements needed for perfect recovery of the signal. CS has exploded in recent years with thousands of technical publications and applications ...
Lagunas, Eva — Universitat Politecnica de Catalunya
Parameter Estimation and Filtering Using Sparse Modeling
Sparsity-based estimation techniques deal with the problem of retrieving a data vector from an undercomplete set of linear observations, when the data vector is known to have few nonzero elements with unknown positions. This is also known as the atomic decomposition problem, and has been carefully studied in the field of compressed sensing. Recent findings have established a method called basis pursuit, also known as the Least Absolute Shrinkage and Selection Operator (LASSO), as a numerically reliable sparsity-based approach. Although the atomic decomposition problem is generally NP-hard, it has been shown that basis pursuit may provide exact solutions under certain assumptions. This has led to an extensive study of signals with sparse representation in different domains, providing a new general insight into signal processing. This thesis further investigates the role of sparsity-based techniques, especially basis pursuit, for solving parameter estimation ...
Panahi, Ashkan — Chalmers University of Technology
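
To make the basis pursuit/LASSO connection concrete, here is a minimal Python solver for the penalized problem 0.5*||Ax - y||^2 + lam*||x||_1 using ISTA (iterative soft-thresholding); the dimensions and penalty are illustrative, and practical solvers (FISTA, ADMM, coordinate descent) converge considerably faster.

# LASSO via ISTA: gradient step on the quadratic, then the l1 proximal operator.
import numpy as np

rng = np.random.default_rng(4)
m, n, k = 80, 200, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = np.array([1.5, -2.0, 1.0, 0.8, -1.2])
y = A @ x_true + 0.01 * rng.standard_normal(m)

lam = 0.02
step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1/L, L = Lipschitz constant
x = np.zeros(n)
for _ in range(2000):
    z = x - step * A.T @ (A @ x - y)            # gradient step
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft-threshold

print('recovered support:', np.flatnonzero(np.abs(x) > 0.1))
print('true support     :', np.flatnonzero(x_true))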
Nonnegative Matrix and Tensor Factorizations: Models, Algorithms and Applications
In many fields, such as linear algebra, computational geometry, combinatorial optimization, analytical chemistry, and geoscience, nonnegativity of the solution is required, either because the data is physically nonnegative or because the mathematical modeling of the problem requires nonnegativity. Image and audio processing are two examples for which the data are physically nonnegative. Probability and graph theory are examples for which the mathematical modeling requires nonnegativity. This thesis is about the nonnegative factorization of matrices and tensors: namely, nonnegative matrix factorization (NMF) and nonnegative tensor factorization (NTF). NMF problems arise in a wide range of scenarios, such as the aforementioned fields, and NTF problems arise as a generalization of NMF. As the title suggests, the contributions of this thesis are centered on NMF and NTF over three aspects: modeling, algorithms, and applications. On the modeling ...
Ang, Man Shun — Université de Mons
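
As a hedged sketch of the basic factorization (not the models or algorithms contributed by the thesis), the following Python snippet runs the classical Lee-Seung multiplicative updates for NMF under the Frobenius objective ||X - WH||_F^2; the updates preserve nonnegativity because they only multiply nonnegative factors by nonnegative ratios.

# NMF with Lee-Seung multiplicative updates on exactly rank-r nonnegative data.
import numpy as np

rng = np.random.default_rng(5)
m, n, r = 30, 40, 4
X = rng.random((m, r)) @ rng.random((r, n))      # nonnegative data of rank r

W, H = rng.random((m, r)), rng.random((r, n))    # nonnegative initialization
eps = 1e-12                                      # guard against division by zero
for _ in range(500):
    H *= (W.T @ X) / (W.T @ W @ H + eps)         # multiplicative update for H
    W *= (X @ H.T) / (W @ H @ H.T + eps)         # multiplicative update for W

print('relative error:', np.linalg.norm(X - W @ H) / np.linalg.norm(X))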
Advanced Signal Processing Concepts for Multi-Dimensional Communication Systems
The widespread use of mobile internet and smart applications has led to an explosive growth in mobile data traffic. With the rise of smart homes, smart buildings, and smart cities, this demand is ever-growing, since future communication systems will require the integration of multiple networks serving diverse sectors, domains, and applications, such as multimedia, virtual or augmented reality, machine-to-machine (M2M) communication and the Internet of Things (IoT), automotive applications, and many more. Therefore, future communication systems will not only be required to provide Gbps wireless connectivity but also to fulfill other requirements such as low latency and massive machine-type connectivity while ensuring quality of service. Without significant technological advances to increase the system capacity, the existing telecommunications infrastructure will be unable to support these multi-dimensional requirements. This poses an important demand for suitable waveforms with ...
Cheema, Sher Ali — Technische Universität Ilmenau