Exact Unbiased Inverse of the Anscombe Transformation and its Poisson-Gaussian Generalization

Digital image acquisition is an intricate process, which is subject to various errors. Some of these errors are signal-dependent, whereas others are signal-independent. In particular, photon emission and sensing are inherently random physical processes, which in turn substantially contribute to the randomness in the output of the imaging sensor. This signal-dependent noise can be approximated through a Poisson distribution. On the other hand, there are various signal-independent noise sources involved in the image capturing chain, arising from the physical properties and imperfections of the imaging hardware. The noise attributed to these sources is typically modelled collectively as additive white Gaussian noise. Hence, we have three common ways of modelling the noise present in a digital image: Gaussian, Poisson, or Poisson-Gaussian. Image denoising aims at removing or attenuating this noise from the captured image, in order to provide an estimate of ...
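
Concretely, the transforms involved take closed forms. Below is a minimal Python sketch (assuming only numpy) of the forward Anscombe transform, the conventional asymptotically unbiased inverse, and the closed-form approximation of the exact unbiased inverse proposed in this line of work; the intensity value and sample size are illustrative.

```python
import numpy as np

def anscombe(x):
    # Forward Anscombe transform: Poisson(lam) -> approx. N(2*sqrt(lam + 3/8), 1)
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inverse_asymptotic(d):
    # Asymptotically unbiased inverse; accurate only for large counts
    return 0.25 * d**2 - 1.0 / 8.0

def exact_unbiased_inverse(d):
    # Closed-form approximation of the exact unbiased inverse (Makitalo & Foi)
    return (0.25 * d**2 + 0.25 * np.sqrt(1.5) / d
            - (11.0 / 8.0) / d**2 + (5.0 / 8.0) * np.sqrt(1.5) / d**3 - 1.0 / 8.0)

rng = np.random.default_rng(0)
lam = 1.0                                  # low-count regime where bias matters
x = rng.poisson(lam, size=1_000_000)
d = anscombe(x).mean()                     # stand-in for a "denoised" value
print(inverse_asymptotic(d), exact_unbiased_inverse(d))  # latter is closer to lam
```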

Mäkitalo, Markku — Tampere University of Technology


Sparsity Models for Signals: Theory and Applications

Many signal and image processing applications have benefited remarkably from the theory of sparse representations. In its classical form, this theory models a signal as having a sparse representation under a given dictionary; this is referred to as the "Synthesis Model". In this work we focus on greedy methods for the problem of recovering a signal from a set of deteriorated linear measurements. We consider four different sparsity frameworks that extend the aforementioned synthesis model: (i) the cosparse analysis model; (ii) the signal space paradigm; (iii) the transform domain strategy; and (iv) the sparse Poisson noise model. Our algorithms of interest in the first part of the work are the greedy-like schemes: CoSaMP, subspace pursuit (SP), iterative hard thresholding (IHT) and hard thresholding pursuit (HTP). It has been shown for the synthesis model that these can achieve a stable recovery ...
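
As an illustration of the greedy-like family, here is a minimal Python sketch of IHT for the synthesis model, assuming numpy; dimensions, sparsity level and iteration count are illustrative, not the thesis's exact setup.

```python
import numpy as np

def iht(A, y, k, n_iters=500, step=None):
    """Recover a k-sparse x from y = A @ x via gradient steps + hard thresholding."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # conservative, keeps iterations stable
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        x = x + step * A.T @ (y - A @ x)         # gradient step on 0.5*||y - Ax||^2
        x[np.argsort(np.abs(x))[:-k]] = 0.0      # hard threshold: keep k largest entries
    return x

rng = np.random.default_rng(1)
m, n, k = 80, 200, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)     # roughly unit-norm columns
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
print(np.linalg.norm(iht(A, A @ x_true, k) - x_true))  # small recovery error
```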

Giryes, Raja — Technion


Filter Bank Techniques for the Physical Layer in Wireless Communications

Filter bank based multicarrier modulation is an evolution with many advantages over the widespread OFDM multicarrier scheme. The author of the thesis stands behind this statement and proposes various solutions for practical physical layer problems based on filter bank processing of wireless communications signals. Filter banks are an evolved form of subband processing, harnessing the key advantages of the original efficient subband processing based on the fast Fourier transform while addressing some of its shortcomings, at the price of somewhat increased implementation complexity. The main asset of filter banks is the possibility to design very frequency selective subband filters that compartmentalize the overall spectrum into well isolated subbands, while still making very efficient use of the assigned bandwidth. This thesis first exploits this main feature of the filter banks in the subband system configuration, in which the analysis filter bank ...
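
A minimal Python sketch of that idea, assuming numpy and scipy: a single lowpass prototype filter is complex-modulated into M frequency-selective subband filters. The number of subbands and prototype length are illustrative, and a practical implementation would use a polyphase network rather than direct filtering.

```python
import numpy as np
from scipy.signal import firwin, lfilter

M = 8                                      # number of subbands (illustrative)
L = 8 * M                                  # prototype filter length
proto = firwin(L, 1.0 / M)                 # lowpass prototype, cutoff pi/M
bank = [proto * np.exp(2j * np.pi * k * np.arange(L) / M) for k in range(M)]

x = np.random.default_rng(2).standard_normal(4096)   # wideband test signal
subbands = [lfilter(h, 1.0, x) for h in bank]        # analysis filter bank
# An efficient realization shares one polyphase network and an FFT across
# all M branches instead of running M separate convolutions.
```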

Hidalgo Stitz, Tobias — Tampere University of Technology


General Approaches for Solving Inverse Problems with Arbitrary Signal Models

Ill-posed inverse problems appear in many signal and image processing applications, such as deblurring, super-resolution and compressed sensing. The common approach to address them is to design a specific algorithm, or more recently a specific deep neural network, for each problem. Both the signal processing and machine learning approaches have drawbacks: traditional reconstruction strategies exhibit limited performance for complex signals, such as natural images, due to the difficulty of modeling them mathematically; while modern works that circumvent signal modeling by training deep convolutional neural networks (CNNs) suffer from a huge performance drop when the observation model used in training is inexact. In this work, we develop and analyze reconstruction algorithms that are not restricted to a specific signal model and are able to handle different observation models. Our main contributions include: (a) We generalize the popular sparsity-based CoSaMP algorithm to any signal ...
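
For flavour, a minimal Python sketch of a model-agnostic projected gradient loop (a simpler relative of the generalized CoSaMP mentioned above, not the thesis's algorithm itself): the signal model enters only through a pluggable projection operator, here instantiated with k-sparse hard thresholding.

```python
import numpy as np

def projected_gradient(A, y, project, n_iters=300):
    """Minimize ||y - A x||^2 over the signal model encoded by `project`."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        x = project(x + step * A.T @ (y - A @ x))   # gradient step, then project
    return x

def sparse_projector(k):
    """Projection onto k-sparse vectors; swap in any other model projector."""
    def project(x):
        out = np.zeros_like(x)
        idx = np.argpartition(np.abs(x), -k)[-k:]   # support of the k largest entries
        out[idx] = x[idx]
        return out
    return project

# Usage: x_hat = projected_gradient(A, y, sparse_projector(5))
```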

Tirer, Tom — Tel Aviv University


Design and Applications of Filterbank Structures Implementing Reed-Solomon Codes

In today's communication systems, error correction provides robust data transmission through imperfect (noisy) channels. Error correcting codes are a crucial component in most storage and communication systems, wired or wireless, e.g. GSM, UMTS, xDSL, and CD/DVD. At least as important as the data integrity issue is the recent realization that error correcting codes fundamentally change the trade-offs in system design. High-integrity, low-redundancy coding can be applied to increase data rates, extend battery lifetime, or reduce hardware costs, making it possible to enter the mass market. When it comes to the design of error correcting codes and their properties, there are two main theories that play an important role in this work. Classical coding theory aims at finding the best code given an available block length. This thesis focuses on the ubiquitous Reed-Solomon codes, one of the major ...
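
A minimal Python sketch of the evaluation view of Reed-Solomon encoding, over the prime field GF(257) for simplicity (practical systems typically work in GF(2^8) with dedicated arithmetic); the message and code parameters are illustrative.

```python
P = 257   # prime field size, so arithmetic mod P forms a field (illustrative)

def rs_encode(msg, n):
    """Encode k message symbols as evaluations of a degree-(k-1) polynomial at
    n distinct points; any k error-free symbols determine the message, and the
    code corrects up to (n - k) // 2 symbol errors."""
    assert n < P and all(0 <= c < P for c in msg)
    return [sum(c * pow(x, i, P) for i, c in enumerate(msg)) % P
            for x in range(1, n + 1)]

codeword = rs_encode([12, 7, 41], n=7)   # (n, k) = (7, 3): corrects 2 errors
```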

Van Meerbergen, Geert — Katholieke Universiteit Leuven


Cosparse regularization of physics-driven inverse problems

Inverse problems related to physical processes are of great importance in practically every field related to signal processing, such as tomography, acoustics, wireless communications, medical and radar imaging, to name only a few. At the same time, many of these problems are quite challenging due to their ill-posed nature. On the other hand, signals originating from physical phenomena are often governed by laws expressible through linear Partial Differential Equations (PDE), or equivalently, integral equations and the associated Green’s functions. In addition, these phenomena are usually induced by sparse singularities, appearing as sources or sinks of a vector field. In this thesis we primarily investigate the coupling of such physical laws with a prior assumption on the sparse origin of a physical process. This gives rise to a “dual” regularization concept, formulated either as sparse analysis (cosparse), yielded by a PDE ...
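
A minimal Python sketch of sparse analysis (cosparse) regularization in its simplest form, solved with ADMM; here the analysis operator Omega is a first-difference matrix standing in for a discretized PDE, and all parameters are illustrative.

```python
import numpy as np

def analysis_admm(A, Omega, y, lam=0.5, rho=1.0, n_iters=200):
    """ADMM for min_x 0.5*||A x - y||^2 + lam*||Omega x||_1."""
    z = np.zeros(Omega.shape[0]); u = np.zeros_like(z)
    Q = A.T @ A + rho * Omega.T @ Omega          # cached; assumed invertible
    for _ in range(n_iters):
        x = np.linalg.solve(Q, A.T @ y + rho * Omega.T @ (z - u))
        w = Omega @ x + u
        z = np.sign(w) * np.maximum(np.abs(w) - lam / rho, 0.0)  # soft threshold
        u += Omega @ x - z
    return x

# Piecewise-constant signal: sparse under a first-difference analysis operator
rng = np.random.default_rng(6)
x_true = np.repeat([0.0, 1.0, -0.5], [30, 40, 30])
Omega = np.diff(np.eye(100), axis=0)             # stand-in for a discretized PDE
x_hat = analysis_admm(np.eye(100), Omega, x_true + 0.1 * rng.standard_normal(100))
```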

Kitić, Srđan — Université de Rennes 1


Efficient representation, generation and compression of digital holograms

Digital holography is a discipline of science that measures or reconstructs the wavefield of light by means of interference. The wavefield encodes three-dimensional information, which has many applications, such as interferometry, microscopy, non-destructive testing and data storage. Moreover, digital holography is emerging as a display technology. Holograms can recreate the wavefield of a 3D object, thereby reproducing all depth cues for all viewpoints, unlike current stereoscopic 3D displays. At high quality, an object shown on a holographic display becomes indistinguishable from a real one. High-quality holograms require large volumes of data to be represented, approaching resolutions of billions of pixels. For holographic videos, the data rates needed for transmitting and encoding raw holograms quickly become unfeasible with currently available hardware. Efficient generation and coding of holograms will be of utmost importance for future holographic displays. ...
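
A minimal Python sketch of the core numerical operation behind hologram generation, wavefield propagation via the angular spectrum method; the grid size, pixel pitch, wavelength and distance are illustrative.

```python
import numpy as np

def angular_spectrum(u0, wavelength, z, dx):
    """Propagate a complex field u0 (N x N, sample pitch dx) over distance z."""
    n = u0.shape[0]
    fx = np.fft.fftfreq(n, d=dx)                       # spatial frequencies
    fxx, fyy = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * fxx) ** 2 - (wavelength * fyy) ** 2
    kz = 2j * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    h = np.where(arg > 0, np.exp(kz * z), 0.0)         # drop evanescent waves
    return np.fft.ifft2(np.fft.fft2(u0) * h)

u0 = np.zeros((512, 512), dtype=complex)
u0[256, 256] = 1.0                                     # point source
field = angular_spectrum(u0, wavelength=633e-9, z=0.01, dx=8e-6)
```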

Blinder, David — Vrije Universiteit Brussel


Robust Network Topology Inference and Processing of Graph Signals

The abundance of large and heterogeneous systems is rendering contemporary data more pervasive, intricate, and non-regular in structure. With classical techniques struggling to deal with the irregular (non-Euclidean) domains where such signals are defined, a popular approach at the heart of graph signal processing (GSP) is to: (i) represent the underlying support via a graph and (ii) exploit the topology of this graph to process the signals at hand. In addition to the irregular structure of the signals, another critical limitation is that the observed data is prone to perturbations, which, in the context of GSP, may affect not only the observed signals but also the topology of the supporting graph. Ignoring the presence of perturbations, along with the couplings between the errors in the signal and the errors in their support, can drastically hinder ...
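
A minimal Python sketch of the GSP recipe, assuming numpy: a polynomial graph filter processes a signal using only the topology encoded in a graph shift operator S (e.g., an adjacency or Laplacian matrix); the graph and coefficients are illustrative.

```python
import numpy as np

def graph_filter(S, x, coeffs):
    """Apply H = sum_k coeffs[k] * S^k to the graph signal x."""
    y = np.zeros_like(x)
    s_pow_x = x.copy()
    for h in coeffs:
        y += h * s_pow_x          # accumulate h_k * (S^k x)
        s_pow_x = S @ s_pow_x     # next power via repeated shifts
    return y

# 4-node path graph with the adjacency matrix as shift operator
S = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], dtype=float)
y = graph_filter(S, np.array([1.0, 0.0, 0.0, 0.0]), coeffs=[0.5, 0.3, 0.2])
```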

Rey, Samuel — King Juan Carlos University


Analysis of Methods for the Detection of Kikuchi Diffraction Lines

The goal of the dissertation is to investigate and propose new methods for automatic Kikuchi line detection. Orientation microscopy, a relatively new subdivision of microscopic investigation, is already well established in the scanning electron microscope (SEM). However, the spatial resolution of SEM limits the investigation of fine-grained and highly deformed materials. The need for investigation at the nanoscale motivates the development of an appropriate method for the transmission electron microscope (TEM). Automated acquisition and indexing of Kikuchi diffraction patterns, necessary for creating orientation maps in TEM, pose more difficulties than in SEM. To solve the problem, the author developed and tested three methods for automatic Kikuchi line detection. The first method is based on directional image filtration and scanning the entire image with a specially designed mask. This method yields good results but is relatively slow. The second method makes use of ...
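
For context, a minimal Python sketch of straight-line detection with the standard Hough transform, a classical alternative to the three methods developed in the dissertation (not one of them); it assumes scikit-image, and the synthetic input pattern is illustrative.

```python
import numpy as np
from skimage.transform import hough_line, hough_line_peaks

pattern = np.zeros((200, 200))
pattern[100, :] = 1.0                       # stand-in for one bright Kikuchi line

h, angles, dists = hough_line(pattern)      # accumulate votes in (angle, dist) space
_, line_angles, line_dists = hough_line_peaks(h, angles, dists)
# Each (angle, dist) pair parameterizes a detected line: x*cos(a) + y*sin(a) = d
```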

Fraczek, Rafal — AGH University of Science and Technology


Array Signal Processing Algorithms for Beamforming and Direction Finding

Array processing is an area of study devoted to processing the signals received from an antenna array and extracting information of interest. It has played an important role in widespread applications like radar, sonar, and wireless communications. Numerous adaptive array processing algorithms have been reported in the literature in the last several decades. These algorithms, in a general view, exhibit a trade-off between performance and required computational complexity. In this thesis, we focus on the development of array processing algorithms for beamforming and direction of arrival (DOA) estimation. In the beamformer design, we employ the constrained minimum variance (CMV) and the constrained constant modulus (CCM) criteria to propose full-rank and reduced-rank adaptive algorithms. Specifically, for the full-rank algorithms, we present two low-complexity adaptive step size mechanisms with the CCM criterion for the step size adaptation of the ...
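
A minimal Python sketch of the CMV (MVDR) criterion underlying the proposed beamformers: minimize output power subject to a distortionless response in the look direction. The array geometry, look angle, and covariance estimate are illustrative.

```python
import numpy as np

def cmv_beamformer(R, a):
    """w = R^{-1} a / (a^H R^{-1} a): min output power s.t. w^H a = 1."""
    r_inv_a = np.linalg.solve(R, a)
    return r_inv_a / (a.conj() @ r_inv_a)

def ula_steering(n_sensors, theta, spacing=0.5):
    """Steering vector of an n-sensor uniform linear array (spacing in wavelengths)."""
    k = np.arange(n_sensors)
    return np.exp(-2j * np.pi * spacing * k * np.sin(theta))

n = 8
a = ula_steering(n, theta=0.0)                    # look direction: broadside
rng = np.random.default_rng(3)
X = rng.standard_normal((n, 1000)) + 1j * rng.standard_normal((n, 1000))
R = X @ X.conj().T / 1000 + 1e-3 * np.eye(n)      # sample covariance + loading
w = cmv_beamformer(R, a)                          # distortionless toward theta = 0
```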

Lei Wang — University of York


Performance Enhancement for Filter Bank Multicarrier Methods in Multi-Antenna Wireless Communication Systems

This thesis investigates filter bank based multicarrier modulation using offset quadrature amplitude modulation (FBMC/OQAM), which is characterised by a critically sampled FBMC system that achieves full spectral efficiency in the sense of being free of redundancy. As a starting point, a performance comparison between FBMC/OQAM and oversampled (OS) FBMC systems is made in terms of per-subband fractionally spaced equalisation to compensate for the transmission distortions caused by dispersive channels. Simulation results show reduced performance when equalising FBMC/OQAM compared to OS-FBMC, where the advantage of the latter stems from the use of guard bands. Alternatively, the inferior performance of FBMC/OQAM can be attributed to the inability of a per-subband equaliser to address potential intercarrier interference (ICI) in this system. The FBMC/OQAM system is analysed by representing the equivalent transmultiplexed channel including the filter banks as ...
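
For reference, a minimal Python sketch of the simplest per-subband equaliser, a single zero-forcing tap per subband that treats the channel as locally flat; the fractionally spaced multi-tap designs studied in the thesis refine this baseline. The channel taps are illustrative.

```python
import numpy as np

M = 64                                     # number of subbands (illustrative)
h = np.array([1.0, 0.5, 0.2])              # dispersive channel impulse response
H = np.fft.fft(h, M)                       # channel response at subband centres
one_tap_eq = H.conj() / np.abs(H) ** 2     # per-subband zero-forcing taps
# Applied after the analysis filter bank: Y_eq[k] = one_tap_eq[k] * Y[k]
```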

Nagy, Amr — University of Strathclyde


Distributed Spatial Filtering in Wireless Sensor Networks

Wireless sensor networks (WSNs) paved the way for accessing previously unavailable data by deploying sensors at various locations in space, each collecting local measurements of a target source signal. By exploiting the information resulting from the multitude of signals measured at the different sensors of the network, various tasks can be achieved, such as denoising or dimensionality reduction, which can in turn be used, e.g., for source localization or for detecting seizures from electroencephalography measurements. Spatial filtering consists of linearly combining the signals measured at each sensor of the network such that the resulting filtered signal is optimal in some sense. This technique is widely used in biomedical signal processing, wireless communication, and acoustics, among other fields. In spatial filtering tasks, the aim is to exploit the correlation between the signals of all sensors in the network, therefore requiring access to ...
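
A minimal Python sketch of one such spatial filtering task, computed centrally with access to all sensor signals (the distributed setting is the thesis's subject): a max-SNR filter obtained from a generalized eigendecomposition, assuming numpy and scipy with illustrative covariances.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(4)
n = 6                                        # total sensor channels (illustrative)
a = rng.standard_normal(n)                   # source mixing vector
Rs = np.outer(a, a)                          # rank-1 desired-source covariance
C = rng.standard_normal((n, n))
Rn = np.eye(n) + C @ C.T / n                 # positive definite noise covariance

vals, vecs = eigh(Rs, Rn)                    # generalized eigendecomposition
w = vecs[:, -1]                              # eigenvector of the largest eigenvalue
# w maximizes the output SNR (w.T Rs w) / (w.T Rn w)
```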

Musluoglu, Cem Ates — KU Leuven


Distributed Signal Processing for Binaural Hearing Aids

Over the last centuries, hearing aids have evolved from crude and bulky horn-shaped instruments to lightweight and almost invisible digital signal processing devices. While most of the research has focused on the design of monaural devices, the use of a wireless link has recently been advocated to enable data transfer between hearing aids so as to obtain a binaural system. The availability of a wireless link offers brand new perspectives, but also poses great technical challenges. It requires the design of novel signal processing schemes that address the restricted communication bitrates, processing delays and power consumption limitations imposed by wireless hearing aids. The goal of this dissertation is to address these issues at both a theoretical and a practical level. We start by taking a distributed source coding view on the problem of binaural noise reduction. The proposed analysis allows ...

Roy, Olivier — EPFL


Adaptive Noise Cancelation in Speech Signals

Today, adaptive algorithms represent one of the most frequently used computational tools for the processing of digital speech signals. This work investigates and analyzes the properties of adaptive algorithms in speech communication applications where rigorous conditions apply, such as noise and echo cancelation. Like other theses in this field, it tackles the long-standing trade-off between computational complexity and rate of convergence. It introduces some new adaptive methods that stem from existing algorithms, as well as a novel concept entitled Optimal Step-Size (OSS). In the first part of the thesis we investigate some well-known, widely used adaptive techniques, such as the Normalized Least Mean Squares (NLMS) and the Recursive Least Squares (RLS) algorithms. In spite of the fact that the NLMS and the RLS belong to the "simplest" principles, as far as complexity is ...
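
A minimal Python sketch of NLMS-based noise cancelation, assuming numpy: a reference noise signal is adaptively filtered and subtracted from the primary input, so the error converges to the clean signal. The filter length, step size, and test signals are illustrative.

```python
import numpy as np

def nlms(d, x, n_taps=32, mu=0.5, eps=1e-8):
    """Adapt w so that the filtered reference x tracks the noise in d;
    returns the error signal, i.e. the cleaned output."""
    w = np.zeros(n_taps)
    e = np.zeros(len(d))
    for n in range(n_taps, len(d)):
        x_win = x[n - n_taps:n][::-1]                   # most recent samples first
        e[n] = d[n] - w @ x_win                         # error = signal estimate
        w += mu * e[n] * x_win / (x_win @ x_win + eps)  # normalized LMS update
    return e

rng = np.random.default_rng(5)
ref = rng.standard_normal(10_000)                       # noise reference input
noise = np.convolve(ref, [0.8, -0.3, 0.1])[:10_000]     # noise reaching the mic
speech = np.sin(2 * np.pi * 0.01 * np.arange(10_000))   # stand-in for speech
cleaned = nlms(speech + noise, ref)                     # converges to ~speech
```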

Malenovsky, Vladimir — Department of Telecommunications, Brno University of Technology, Czech Republic


Advances in graph signal processing: Graph filtering and network identification

To the surprise of most of us, complexity in nature spawns from simplicity. No matter how simple a basic unit is, when many of them work together, the interactions among these units lead to complexity. This complexity is present in the spreading of diseases, where slightly different policies, or conditions, might lead to very different results; or in biological systems, where the interactions between elements maintain the delicate balance that keeps life running. Fortunately, despite their complexity, current advances in technology have allowed us to have more than just a sneak peek at these systems. With new views on how to observe such systems and gather data, we aim to understand the complexity within. One of these new views comes from the field of graph signal processing, which provides models and tools to understand and process data coming from such complex systems. With ...

Coutino, Mario — Delft University of Technology
