Robust Methods for Sensing and Reconstructing Sparse Signals

Compressed sensing (CS) is a recently introduced signal acquisition framework that goes against the traditional Nyquist sampling paradigm. CS demonstrates that a sparse, or compressible, signal can be acquired using a low-rate acquisition process. Since noise is always present in practical data acquisition systems, sensing and reconstruction methods are typically developed assuming a Gaussian (light-tailed) model for the corrupting noise. However, when the underlying signal and/or the measurements are corrupted by impulsive noise, commonly employed linear sampling operators, coupled with Gaussian-derived reconstruction algorithms, fail to recover a close approximation of the signal. This dissertation develops robust sampling and reconstruction methods for sparse signals in the presence of impulsive noise. To achieve this objective, we make use of robust statistics theory to develop methods that address the problem of impulsive noise in CS systems. We develop a generalized Cauchy distribution (GCD) ...
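As a toy illustration of why Gaussian-derived estimators break down under impulsive noise (this sketch is not taken from the dissertation; the Cauchy scale `gamma`, the data, and the IRLS scheme are illustrative choices), compare the sample mean with a Cauchy-loss M-estimate of location:

```python
import numpy as np

def cauchy_m_estimate(x, gamma=1.0, iters=50):
    """Location M-estimate under a Cauchy (Lorentzian) loss via iteratively
    reweighted least squares. The weights w_i = 1 / (gamma^2 + (x_i - mu)^2)
    downweight impulses, unlike the sample mean (the Gaussian-loss estimate)."""
    mu = np.median(x)                        # robust starting point
    for _ in range(iters):
        w = 1.0 / (gamma**2 + (x - mu)**2)
        mu = np.sum(w * x) / np.sum(w)
    return mu

rng = np.random.default_rng(0)
clean = rng.normal(5.0, 0.1, size=100)       # light-tailed samples around 5
data = np.concatenate([clean, [1000.0, -800.0]])   # two impulsive outliers

print(abs(np.mean(data) - 5.0))              # mean is badly biased by the impulses
print(abs(cauchy_m_estimate(data) - 5.0))    # M-estimate stays close to 5
```

The same heavy-tailed-loss principle, applied to the reconstruction residual instead of a location parameter, is what makes Cauchy-derived CS decoders robust.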

Carrillo, Rafael — University of Delaware


Simulation Methods for Linear and Nonlinear Time Series Models with Application to Distorted Audio Signals

This dissertation is concerned with the development of Markov chain Monte Carlo (MCMC) methods for the Bayesian restoration of degraded audio signals. First, the Bayesian approach to time series modelling is reviewed, then established MCMC methods are introduced. The first problem to be addressed is that of model order uncertainty. A reversible-jump sampler is proposed which can move between models of different order. It is shown that faster convergence can be achieved by exploiting the analytic structure of the time series model. This approach to model order uncertainty is applied to the problem of noise reduction using the simulation smoother. The effects of incorrect autoregressive (AR) model orders are demonstrated, and a mixed model order MCMC noise reduction scheme is developed. Nonlinear time series models are surveyed, and the advantages of linear-in-the-parameters models are explained. A nonlinear AR (NAR) model, ...
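The model-order-uncertainty problem can be illustrated with a much simpler, non-Bayesian stand-in for the reversible-jump sampler (the AR simulation, least-squares fitting, and the AIC criterion below are illustrative assumptions, not the thesis's method):

```python
import numpy as np

def fit_ar(x, p):
    """Least-squares fit of an AR(p) model; returns coefficients (lag 1..p)
    and the residual variance."""
    X = np.column_stack([x[p - k - 1 : len(x) - k - 1] for k in range(p)])
    y = x[p:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ a
    return a, np.mean(resid**2)

def aic_order(x, max_p=6):
    """Pick the AR order minimizing AIC -- a crude point-estimate stand-in
    for sampling over model orders with a reversible-jump scheme."""
    n = len(x)
    aics = [n * np.log(fit_ar(x, p)[1]) + 2 * p for p in range(1, max_p + 1)]
    return int(np.argmin(aics)) + 1

rng = np.random.default_rng(1)
# simulate AR(2): x_t = 0.75 x_{t-1} - 0.5 x_{t-2} + e_t
x = np.zeros(2000)
for t in range(2, len(x)):
    x[t] = 0.75 * x[t - 1] - 0.5 * x[t - 2] + rng.normal()
print(aic_order(x))        # selected model order
```

Where AIC commits to one order, the reversible-jump sampler instead explores the posterior over all orders, which is what allows the mixed-order noise reduction scheme described above.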

Troughton, Paul Thomas — University of Cambridge


Ultra Wideband Communications: from Analog to Digital

The aim of this thesis is to investigate key issues encountered in the design of transmission schemes and receiving techniques for Ultra Wideband (UWB) communication systems. Based on different data rate applications, this work is divided into two parts, in which energy-efficient and robust physical layer solutions are proposed, respectively. Due to the huge bandwidth of UWB signals, a considerable number of multipath arrivals with various path gains is resolvable at the receiver. For low data rate impulse radio UWB systems, suboptimal non-coherent detection is a simple way to effectively capture the multipath energy. Feasible techniques that increase the power efficiency and the interference robustness of non-coherent detection need to be investigated. For high data rate direct sequence UWB systems, a large number of multipath arrivals results in severe inter-/intra-symbol interference. Additionally, the system performance may also be degraded by ...
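A minimal sketch of non-coherent detection for low data rate impulse radio (binary pulse-position modulation, AWGN, and the toy multipath model are assumptions for illustration, not taken from the thesis):

```python
import numpy as np

def energy_detect_ppm(r, ns):
    """Non-coherent detection of binary PPM: integrate the received energy in
    each candidate pulse position and pick the larger -- no channel estimate
    is needed, so multipath energy is captured without coherent combining."""
    e0 = np.sum(r[:ns] ** 2)
    e1 = np.sum(r[ns : 2 * ns] ** 2)
    return 0 if e0 > e1 else 1

rng = np.random.default_rng(2)
ns = 64                                   # samples per PPM slot
bits = rng.integers(0, 2, size=200)
errors = 0
for b in bits:
    r = rng.normal(0, 0.3, size=2 * ns)   # additive white Gaussian noise
    # multipath-like arrival: several random taps inside the transmitted slot
    r[b * ns : b * ns + 8] += rng.normal(0, 1.0, size=8)
    errors += energy_detect_ppm(r, ns) != b
print(errors / len(bits))                 # empirical bit error rate
```

The simplicity comes at a cost: integrating energy also integrates noise and interference over the whole slot, which is exactly why the power efficiency and interference robustness of non-coherent detection need improvement.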

Song, Nuan — Ilmenau University of Technology


Robust Estimation and Model Order Selection for Signal Processing

In this thesis, advanced robust estimation methodologies for signal processing are developed and analyzed. The developed methodologies solve problems concerning multi-sensor data, robust model selection as well as robustness for dependent data. The work has been applied to solve practical signal processing problems in different areas of biomedical and array signal processing. In particular, for univariate independent data, a robust criterion is presented to select the model order with an application to corneal-height data modeling. The proposed criterion overcomes some limitations of existing robust criteria. For real-world data, it selects the radial model order of the Zernike polynomial of the corneal topography map in accordance with clinical expectations, even if the measurement conditions for videokeratoscopy, which is the state-of-the-art method to collect corneal-height data, are poor. For multi-sensor data, robust model order selection criteria are proposed and applied ...

Muma, Michael — Technische Universität Darmstadt


Advanced Algebraic Concepts for Efficient Multi-Channel Signal Processing

Modern society is undergoing a fundamental change in the way we interact with technology. More and more devices are becoming "smart" by gaining advanced computation capabilities and communication interfaces, from household appliances through transportation systems to large-scale networks like the power grid. Recording, processing, and exchanging digital information is thus becoming increasingly important. Since a growing share of devices is nowadays mobile and hence battery-powered, a particular interest in efficient digital signal processing techniques emerges. This thesis contributes to this goal by demonstrating methods for finding efficient algebraic solutions to various applications of multi-channel digital signal processing. These may not always result in the best possible system performance. However, they often come close while being significantly simpler to describe and to implement. The simpler description facilitates a thorough analysis of their performance, which is crucial to design robust and reliable ...

Roemer, Florian — Ilmenau University of Technology


Adaptive Nonlocal Signal Restoration and Enhancement Techniques for High-Dimensional Data

The large number of practical applications involving digital images has motivated significant interest in restoration solutions that improve the visual quality of the data in the presence of various acquisition and compression artifacts. Digital images are the result of an acquisition process based on the measurement of a physical quantity of interest incident upon an imaging sensor over a specified period of time. The quantity of interest depends on the targeted imaging application. Common imaging sensors measure the number of photons impinging over a dense grid of photodetectors in order to produce an image similar to what is perceived by the human visual system. Other applications focus on parts of the electromagnetic spectrum not visible to the human visual system, and thus require different sensing technologies to form the image. In all cases, even with the advance of ...

Maggioni, Matteo — Tampere University of Technology


Bayesian Compressed Sensing using Alpha-Stable Distributions

During the last decades, information has been gathered and processed at an explosive rate. This fact gives rise to a very important issue, that is, how to effectively and precisely describe the information content of a given source signal, or an ensemble of source signals, such that it can be stored, processed or transmitted while taking into consideration the limitations and capabilities of the various digital devices. One of the fundamental principles of signal processing for decades has been the Nyquist-Shannon sampling theorem, which states that the minimum number of samples needed to reconstruct a signal without error is dictated by its bandwidth. However, there are many cases in everyday life in which sampling at the Nyquist rate produces too much data, thus demanding increased processing power as well as storage. A mathematical theory that emerged ...
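The sub-Nyquist alternative alluded to at the end of the abstract can be sketched with a standard greedy reconstruction, Orthogonal Matching Pursuit (a generic compressed sensing baseline, not the Bayesian alpha-stable method of the thesis; the problem sizes are illustrative):

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily pick the column of A most
    correlated with the residual, then re-fit by least squares on the
    selected support."""
    r = y.copy()
    support = []
    xs = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ r)))
        if j not in support:
            support.append(j)
        xs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ xs
    x = np.zeros(A.shape[1])
    x[support] = xs
    return x

rng = np.random.default_rng(3)
n, m, k = 256, 128, 4                       # ambient dim, measurements, sparsity
x_true = np.zeros(n)
idx = rng.choice(n, size=k, replace=False)
x_true[idx] = rng.choice([-1.0, 1.0], size=k) * (1.0 + rng.random(k))
A = rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, n))   # random sensing matrix
y = A @ x_true                              # m < n noiseless measurements
x_hat = omp(A, y, k)
print(np.linalg.norm(x_hat - x_true))       # reconstruction error
```

With far fewer measurements than the signal dimension, the sparse signal is still recoverable; the bandwidth of the signal plays no role, only its sparsity.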

Tzagkarakis, George — University of Crete


Content-based search and browsing in semantic multimedia retrieval

Growth in storage capacity has led to large digital video repositories and complicated the discovery of specific information without the laborious manual annotation of data. The research focuses on creating a retrieval system that is ultimately independent of manual work. To retrieve relevant content, the semantic gap between the searcher's information need and the content data has to be overcome using content-based technology. The semantic gap consists of two distinct elements: the ambiguity of the true information need and the equivocalness of digital video data. The research problem of this thesis is: what computational content-based models for retrieval increase the effectiveness of the semantic retrieval of digital video? The hypothesis is that semantic search performance can be improved using pattern recognition, data abstraction and clustering techniques jointly with human interaction through manually created queries and visual browsing. The results of this ...

Rautiainen, Mika — University of Oulu


Image Sequence Restoration Using Gibbs Distributions

This thesis addresses a number of issues concerned with the restoration of one type of image sequence, namely archived black and white motion pictures. These are often a valuable historical record, but because of the physical nature of the film they can suffer from a variety of degradations which reduce their usefulness. The main visual defects are ‘dirt and sparkle’, due to dust and dirt becoming attached to the film or abrasion removing the emulsion, and ‘line scratches’, due to the film running against foreign bodies in the camera or projector. For an image restoration algorithm to be successful it must be based on a mathematical model of the image. A number of models have been proposed, and here we explore the use of a general class of model known as Markov Random Fields (MRFs) based on Gibbs distributions by ...
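A minimal sketch of MRF-based restoration under a Gibbs energy, using Iterated Conditional Modes on a binary image with an Ising prior (the parameters, flip-noise degradation, and ICM itself are illustrative assumptions, not the thesis's algorithm):

```python
import numpy as np

def icm_denoise(y, beta=1.5, lam=2.0, sweeps=5):
    """Binary image restoration with an Ising MRF prior via Iterated
    Conditional Modes: each pixel takes the label (+1/-1) that minimizes its
    local Gibbs energy  E(s) = -beta * s * (sum of neighbours) - lam * s * y."""
    x = y.copy()
    h, w = x.shape
    for _ in range(sweeps):
        for i in range(h):
            for j in range(w):
                nb = 0.0
                if i > 0:       nb += x[i - 1, j]
                if i < h - 1:   nb += x[i + 1, j]
                if j > 0:       nb += x[i, j - 1]
                if j < w - 1:   nb += x[i, j + 1]
                x[i, j] = 1.0 if beta * nb + lam * y[i, j] > 0 else -1.0
    return x

rng = np.random.default_rng(4)
truth = -np.ones((32, 32))
truth[8:24, 8:24] = 1.0                       # white square on black
noisy = truth.copy()
flips = rng.random(truth.shape) < 0.1         # "dirt and sparkle"-like flips
noisy[flips] *= -1
restored = icm_denoise(noisy)
print(np.mean(restored != truth))             # far below the 10% input noise
```

The Gibbs prior encodes the assumption that neighbouring pixels usually agree, so isolated corruptions are voted out by their neighbourhoods while large coherent regions survive.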

Morris, Robin David — University of Cambridge


A statistical approach to motion estimation

Digital video technology has been characterized by steady growth over the last decade. New applications such as video e-mail, third-generation mobile phone video communications, videoconferencing and video streaming on the web continuously push for the further evolution of research in digital video coding. In order to be sent over the internet or wireless networks, video information clearly needs compression to meet bandwidth requirements. Compression is mainly realized by exploiting the redundancy present in the data. A sequence of images contains an intrinsic, intuitive and simple form of redundancy: two successive images are very similar. This simple concept is called temporal redundancy. The search for a proper scheme to exploit temporal redundancy is what distinguishes the compression of image sequences from that of still pictures. It also represents the key to very high performance in image sequence coding when compared ...
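Temporal redundancy is typically exploited through motion estimation; a minimal exhaustive-search block-matching sketch (the block size, search range, and synthetic frames are illustrative choices):

```python
import numpy as np

def block_match(prev, curr, bi, bj, bs=8, search=4):
    """Exhaustive-search block matching: find the displacement minimizing the
    sum of absolute differences (SAD) between a block of the current frame
    and candidate blocks of the previous frame."""
    block = curr[bi:bi + bs, bj:bj + bs]
    best, best_mv = np.inf, (0, 0)
    for di in range(-search, search + 1):
        for dj in range(-search, search + 1):
            i, j = bi + di, bj + dj
            if i < 0 or j < 0 or i + bs > prev.shape[0] or j + bs > prev.shape[1]:
                continue                     # candidate falls outside the frame
            sad = np.sum(np.abs(prev[i:i + bs, j:j + bs] - block))
            if sad < best:
                best, best_mv = sad, (di, dj)
    return best_mv

rng = np.random.default_rng(5)
prev = rng.random((64, 64))
curr = np.roll(prev, shift=(2, -3), axis=(0, 1))   # whole frame moves by (2, -3)
print(block_match(prev, curr, 16, 16))             # → (-2, 3)
```

Instead of coding the block's pixels, a coder can then transmit just the motion vector plus a (small) prediction residual, which is where the large gains over still-picture compression come from.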

Moschetti, Fulvio — Swiss Federal Institute of Technology


Speech recognition in noisy conditions using missing feature approach

The research in this thesis addresses the problem of automatic speech recognition in noisy environments. Automatic speech recognition systems achieve acceptable performance in noise-free conditions, but this performance degrades dramatically in the presence of additive noise. This is mainly due to the mismatch between the training and the noisy operating conditions. In the time-frequency representation of the noisy speech signal, some of the clean speech features are masked by noise. In this case the clean speech features cannot be correctly estimated from the noisy speech, and therefore they are considered missing or unreliable. In order to improve the performance of speech recognition systems in additive noise conditions, special attention should be paid to the problems of detecting and compensating for these unreliable features. This thesis is concerned with the problem of missing features applied to automatic speaker-independent speech recognition. ...
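A toy sketch of the missing-feature idea (the SNR threshold, the nearest-template classifier, and the spectra below are illustrative assumptions, not the thesis's recognizer):

```python
import numpy as np

def reliability_mask(noisy_spec, noise_est, snr_db=3.0):
    """A time-frequency cell is reliable when its estimated local SNR exceeds
    a threshold; the remaining cells are treated as missing (masked by noise)."""
    snr = 10.0 * np.log10(noisy_spec / noise_est)
    return snr > snr_db

def masked_distance(obs, template, mask):
    """Marginalized score: compare observation and template on reliable cells only."""
    return np.mean((obs[mask] - template[mask]) ** 2)

# toy spectra over 20 frequency bins; class "a" is the true clean speech
template_a = np.full(20, 5.0)
template_b = np.concatenate([np.full(10, 50.0), np.full(10, 1.0)])
noise = np.full(20, 0.5)
noise[:10] += 50.0                           # strong noise masks the first half
noisy = template_a + noise                   # observed noisy spectrum of class "a"

mask = reliability_mask(noisy, noise)
d_full = [np.mean((noisy - t) ** 2) for t in (template_a, template_b)]
d_mask = [masked_distance(noisy, t, mask) for t in (template_a, template_b)]
full_pick = "ab"[int(np.argmin(d_full))]
masked_pick = "ab"[int(np.argmin(d_mask))]
print(full_pick, masked_pick)                # full distance is fooled; the mask fixes it
```

Scoring only the reliable cells removes the train/test mismatch in the masked region, which is the essence of missing-feature recognition.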

Renevey, Philippe — Swiss Federal Institute of Technology


Video Content Analysis by Active Learning

Advances in compression techniques, decreasing storage costs, and high-speed transmission have facilitated the way videos are created, stored and distributed. As a consequence, videos are now used in many application areas. The increase in the amount of video data deployed and used in today's applications not only reveals the importance of video as a multimedia data type, but has also created a requirement for the efficient management of video data. This need paved the way for new research areas, such as the indexing and retrieval of videos with respect to their spatio-temporal, visual and semantic contents. This thesis presents work towards a unified framework for semi-automated video indexing and interactive retrieval. To create an efficient index, a set of representative key frames is selected to capture and encapsulate the entire video content. This is achieved by, firstly, segmenting the video into its constituent ...
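Key-frame selection can be sketched with a simple histogram-difference rule (the threshold, bin count, and synthetic "shots" are illustrative assumptions, not the method of the thesis):

```python
import numpy as np

def select_key_frames(frames, bins=16, thresh=0.3):
    """Pick a new key frame whenever the intensity-histogram distance to the
    last key frame exceeds a threshold -- a crude stand-in for shot
    segmentation followed by representative-frame selection."""
    keys = [0]
    ref = np.histogram(frames[0], bins=bins, range=(0, 1), density=True)[0]
    for t in range(1, len(frames)):
        h = np.histogram(frames[t], bins=bins, range=(0, 1), density=True)[0]
        d = 0.5 * np.sum(np.abs(h - ref)) / bins    # total-variation distance
        if d > thresh:
            keys.append(t)
            ref = h
    return keys

rng = np.random.default_rng(7)
# two synthetic "shots": dark frames, then an abrupt cut to bright frames
shot1 = [0.2 + 0.02 * rng.random((16, 16)) for _ in range(5)]
shot2 = [0.8 + 0.02 * rng.random((16, 16)) for _ in range(5)]
print(select_key_frames(shot1 + shot2))             # → [0, 5]: one key frame per shot
```

One representative frame per shot keeps the index small while still covering the video content, which is the goal stated in the abstract.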

Camara Chavez, Guillermo — Federal University of Minas Gerais


Synthetic test patterns and compression artefact distortion metrics for image codecs

This thesis presents a framework of test methodology to assess spatial domain compression artefacts produced by image and intra-frame coded video codecs. Few researchers have studied this broad range of artefacts. A taxonomy of image and video compression artefacts is proposed. This is based on the point of origin of the artefact in the image communication model. This thesis presents objective evaluation of distortions known as artefacts due to image and intra-frame coded video compression made using synthetic test patterns. The American National Standards Institute document ANSI T1.801 qualitatively defines blockiness, blur and ringing artefacts. These definitions have been augmented with quantitative definitions in conjunction with the proposed test patterns. A test and measurement environment is proposed in which the codec under test is exercised using a portfolio of test patterns. The test patterns are designed to highlight the artefact ...
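A blockiness measure in the spirit of such artefact metrics can be sketched as the ratio of pixel differences across 8x8 block boundaries to those elsewhere (the exact formula is an illustrative assumption, not the metric proposed in the thesis):

```python
import numpy as np

def blockiness(img, bs=8):
    """Simple blockiness score: mean absolute horizontal pixel difference
    across block boundaries divided by the mean difference elsewhere.
    Values well above 1 indicate visible blocking artefacts."""
    diff = np.abs(np.diff(img, axis=1))
    cols = np.arange(diff.shape[1])
    boundary = (cols % bs) == bs - 1        # differences straddling a block edge
    return diff[:, boundary].mean() / max(diff[:, ~boundary].mean(), 1e-12)

# synthetic test pattern: a smooth ramp, and the same ramp "compressed" into
# piecewise-constant 8-pixel blocks (a caricature of coarse DCT quantization)
smooth = np.tile(np.linspace(0, 1, 64), (64, 1))
blocky = smooth.copy()
for j in range(0, 64, 8):
    blocky[:, j:j + 8] = blocky[:, j:j + 8].mean()

print(round(blockiness(smooth), 2), blockiness(blocky) > 5)
```

Synthetic patterns such as the ramp make the metric's behaviour predictable, which is exactly what the test-pattern methodology exploits.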

Punchihewa, Amal — Massey University, New Zealand


Signal and Image Processing Techniques for Image-Based Photometry With Application to Diabetes Care

This PhD thesis addresses the problem of measuring blood glucose from a photometric measurement setup that requires blood samples in the nanolitre range, which is several orders of magnitude less than the state of the art. The chemical reaction between the blood sample and the reagent in this setup is observed by a camera over time. Notably, the presented framework can be generalised to any image-based photometric measurement scheme in the context of modern biosensors. In this thesis a framework is developed to measure the glucose concentration from the raw images obtained by the camera. Initially, a pre-processing scheme is presented to enhance the raw images. Moreover, a reaction onset detection algorithm is developed. This eliminates unnecessary computation during the constant phase of the chemical reaction. To detect faulty glucose measurements, methods of texture analysis are identified and employed in ...
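Reaction onset detection can be sketched as a simple baseline-departure test (the threshold rule and the synthetic intensity curve are illustrative assumptions, not the algorithm developed in the thesis):

```python
import numpy as np

def detect_onset(intensity, baseline_len=20, k=5.0):
    """Flag the reaction onset as the first sample whose deviation from the
    constant-phase baseline exceeds k baseline standard deviations; returns
    None if the signal never leaves the baseline."""
    mu = intensity[:baseline_len].mean()
    sigma = intensity[:baseline_len].std() + 1e-12
    above = np.abs(intensity - mu) > k * sigma
    return int(np.argmax(above)) if above.any() else None

rng = np.random.default_rng(9)
t = np.arange(200)
signal = 100.0 + 0.05 * rng.normal(size=200)              # constant phase + sensor noise
signal[80:] -= 20.0 * (1 - np.exp(-(t[80:] - 80) / 30.0)) # intensity drops at t = 80
print(detect_onset(signal))                               # close to 80
```

Skipping all frames before the detected onset avoids processing the constant phase, which is the computational saving mentioned in the abstract.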

Demitri, Nevine — Technische Universität Darmstadt


Sparsity in Linear Predictive Coding of Speech

This thesis deals with developing improved modeling methods for speech and audio processing based on recent developments in sparse signal representation. In particular, this work is motivated by the need to address some of the limitations of the well-known linear prediction (LP) based all-pole models currently applied in many modern speech and audio processing systems. In the first part of this thesis, we introduce Sparse Linear Prediction, a set of speech processing tools created by introducing sparsity constraints into the LP framework. This approach defines predictors that look for a sparse residual rather than a minimum-variance one, with direct applications to coding but also consistent with the speech production model of voiced speech, where the excitation of the all-pole filter is modeled as an impulse train. Introducing sparsity into the LP framework also leads to the development of the ...
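The contrast between a minimum-variance and a sparse residual can be sketched with an IRLS approximation of L1-minimized linear prediction (the synthetic voiced signal and the reweighting scheme are illustrative; the thesis's formulation may differ):

```python
import numpy as np

def sparse_lp(x, p=2, iters=30, eps=1e-6):
    """Linear prediction with a sparse (L1-minimized) residual via iteratively
    reweighted least squares, instead of the classical minimum-variance (L2)
    criterion; a sparse residual matches an impulse-train excitation."""
    X = np.column_stack([x[p - k - 1 : len(x) - k - 1] for k in range(p)])
    y = x[p:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)        # L2 starting point
    for _ in range(iters):
        r = y - X @ a
        w = 1.0 / np.sqrt(np.abs(r) + eps)           # IRLS weights approximating L1
        a, *_ = np.linalg.lstsq(X * w[:, None], y * w, rcond=None)
    return a, y - X @ a

# synthetic "voiced" signal: an impulse train driving an all-pole (AR(2)) filter
n = 400
excitation = np.zeros(n)
excitation[::40] = 1.0                               # pitch period of 40 samples
sig = np.zeros(n)
for t in range(n):
    sig[t] = excitation[t]
    if t >= 1:
        sig[t] += 1.3 * sig[t - 1]
    if t >= 2:
        sig[t] -= 0.5 * sig[t - 2]

a, r = sparse_lp(sig, p=2)
print(np.round(a, 3))                                # close to the true [1.3, -0.5]
print(int(np.sum(np.abs(r) > 0.5)))                  # residual spikes = the pitch pulses
```

The residual concentrates its energy in a handful of spikes at the pitch pulses, exactly the impulse-train excitation assumed by the voiced speech production model.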

Giacobello, Daniele — Aalborg University
