Digital Processing Based Solutions for Life Science Engineering Recognition Problems

The field of Life Science Engineering (LSE) is expanding rapidly and is predicted to grow strongly over the coming decades. It covers areas of food and medical research, plant and pest research, and environmental research. In each research area, engineers try to find equations that model a certain life science problem. Once these are found, they investigate different numerical techniques to solve for the unknown variables of the equations. Afterwards, solution improvement is examined by adopting more accurate conventional techniques or by developing novel algorithms. In particular, signal and image processing techniques are widely used to solve those LSE problems that require pattern recognition. However, due to the continuous evolution of life science problems and their nature, these solution techniques cannot cover all aspects and therefore demand further enhancement and improvement. The thesis presents numerical algorithms of digital signal and image processing to ...

Hussein, Walid — Technische Universität München


On the Occurrence of Two-Wave with Diffuse Power Fading in Millimeter-Wave Communications

Mobile communications has become so successful today that conventional radio technologies, in traditional frequency bands below 6 GHz, are soon reaching their limits. To enable massively deployed, ubiquitous, data-hungry mobile applications, this study explores the use of higher frequency bands, the so-called millimeter waves, in mobile communications. These radio bands above 30 GHz are mostly unoccupied and have dozens of gigahertz of bandwidth available. Moreover, advances in electronics have now made it possible to utilize these bands cost-effectively. This thesis studied the millimeter-wave wireless channel by conducting the following experiments: (1) two indoor millimeter-wave measurement campaigns with directive horn antennas on both link ends, (2) an outdoor vehicular millimeter-wave measurement campaign employing a horn antenna and an omnidirectional antenna, and (3) a railway communications ray-tracing study with directive antennas on both sides. ...
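The two-wave with diffuse power (TWDP) fading model named in the title describes an envelope formed by two dominant specular waves plus a diffuse Gaussian component. The following is a minimal Monte-Carlo sketch of that standard model (parameter values are illustrative, not taken from the thesis's measurements):

```python
import numpy as np

rng = np.random.default_rng(0)

def twdp_samples(n, v1, v2, sigma):
    """Draw envelope samples from the two-wave with diffuse power (TWDP) model:
    two specular components with uniform random phases plus a complex
    Gaussian diffuse term of power 2*sigma**2."""
    phi1 = rng.uniform(0, 2 * np.pi, n)
    phi2 = rng.uniform(0, 2 * np.pi, n)
    diffuse = sigma * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
    return np.abs(v1 * np.exp(1j * phi1) + v2 * np.exp(1j * phi2) + diffuse)

r = twdp_samples(200_000, v1=1.0, v2=0.8, sigma=0.5)
k_factor = (1.0**2 + 0.8**2) / (2 * 0.5**2)   # specular-to-diffuse power ratio K
delta = 2 * 1.0 * 0.8 / (1.0**2 + 0.8**2)     # Delta: relative strength of the two waves
print(f"K = {k_factor:.2f}, Delta = {delta:.3f}, mean power = {np.mean(r**2):.3f}")
```

The mean envelope power converges to v1^2 + v2^2 + 2*sigma^2; Delta close to 1, as here, marks the case where the two specular waves can cancel almost completely.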

Erich Zoechmann — TU Wien


Spatial Consistency of 3D Channel Models

Developing realistic channel models is one of the greatest challenges in describing wireless communications. Their quality is crucial for accurately predicting the performance of a wireless system. While channel models have to be accurate in describing the physical properties of wave propagation on the one hand, they have to be as simple as possible on the other. With the recent emergence of antennas with a massive number of elements as a promising technology for further enhancing spectral efficiency, new channel models that characterize the propagation environment in both azimuth and elevation become necessary. While standardization bodies such as the 3rd Generation Partnership Project (3GPP) and the International Telecommunication Union (ITU) have introduced a 3-dimensional (3D) geometry-based stochastic channel model, system-level modeling has been missing to serve the purpose of further analysis and evaluation. Furthermore, with such a ...

Fjolla Ademaj — TU Wien


Audio-visual processing and content management techniques for the study of (human) bioacoustic phenomena

The present doctoral thesis aims at the development of new long-term, multi-channel, audio-visual processing techniques for the analysis of bioacoustic phenomena. The effort focuses on the study of the physiology of the gastrointestinal system, aiming to support medical research into the discovery of gastrointestinal motility patterns and the diagnosis of functional disorders. The term "processing" is used here in a broad sense, incorporating the procedures of signal processing, content description, manipulation, and analysis that are applied to all the recorded bioacoustic signals, the auxiliary audio-visual surveillance information (for monitoring the experiments and the subjects' status), and the extracted audio-video sequences describing the abdominal sound-field alterations. The thesis outline is as follows. The main objective of the thesis, which is the technological support of medical research, is presented in the first chapter. A quick problem definition is initially ...

Dimoulas, Charalampos — Department of Electrical and Computer Engineering, Faculty of Engineering, Aristotle University of Thessaloniki, Thessaloniki, Greece


Sensing physical fields: Inverse problems for the diffusion equation and beyond

Due to significant advances made over the last few decades in the areas of (wireless) networking, communications, and microprocessor fabrication, the use of sensor networks to observe physical phenomena is rapidly becoming commonplace. Over this period, many aspects of sensor networks have been explored, yet a thorough understanding of how to analyse and process the vast amounts of sensor data collected remains an open area of research. This work therefore aims to provide theoretical as well as practical advances in this area. In particular, we consider the problem of inferring certain underlying properties of the monitored phenomena from our sensor measurements. Within mathematics, this is commonly formulated as an inverse problem; whereas in signal processing, it appears as a (multidimensional) sampling and reconstruction problem. Inverse problems are notoriously ill-posed and very demanding to solve; meanwhile ...

Murray-Bruce, John — Imperial College London


Variational Sparse Bayesian Learning: Centralized and Distributed Processing

In this thesis we investigate centralized and distributed variants of sparse Bayesian learning (SBL), an effective probabilistic regression method used in machine learning. Since inference in an SBL model is not tractable in closed form, approximations are needed. We focus on the variational Bayesian approximation, as opposed to others used in the literature, for three reasons. First, it is a flexible general framework for approximate Bayesian inference that estimates probability densities, including point estimates as a special case. Second, it has guaranteed convergence properties. Third, it is a deterministic approximation concept that is applicable even to high-dimensional problems where non-deterministic sampling methods may be prohibitive. We resolve some inconsistencies in the literature involving other SBL approximation techniques with regard to a proper Bayesian treatment and the incorporation of a highly desirable property, namely scale invariance. More specifically, ...
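To make the SBL idea concrete, here is a minimal sketch of the classic type-II maximum-likelihood updates (Tipping-style evidence maximization) on a toy sparse regression problem. Note this is the simplest non-variational variant, shown only to illustrate how per-weight precisions prune irrelevant features; the thesis develops variational and distributed treatments instead:

```python
import numpy as np

rng = np.random.default_rng(1)

# Sparse ground truth: only 3 of 50 coefficients are non-zero.
n, p = 100, 50
X = rng.standard_normal((n, p))
w_true = np.zeros(p)
w_true[:3] = [5.0, -4.0, 3.0]
y = X @ w_true + 0.1 * rng.standard_normal(n)

alpha = np.ones(p)   # per-weight prior precisions (grow large -> weight pruned)
beta = 1.0           # noise precision
for _ in range(100):
    Sigma = np.linalg.inv(np.diag(alpha) + beta * X.T @ X)   # posterior covariance
    mu = beta * Sigma @ X.T @ y                              # posterior mean
    gamma = 1.0 - alpha * np.diag(Sigma)                     # effective dof per weight
    alpha = gamma / (mu**2 + 1e-12)
    beta = (n - gamma.sum()) / (np.sum((y - X @ mu)**2) + 1e-12)

print("largest posterior-mean weights at indices:", np.argsort(np.abs(mu))[-3:])
```

After a few dozen iterations, the precisions of the 47 irrelevant weights diverge and their posterior means collapse to zero, leaving only the true support.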

Buchgraber, Thomas — Graz University of Technology


Calculation Of Scalar Optical Diffraction Field From Its Distributed Samples Over The Space

As a three-dimensional viewing technique, holography provides convincing three-dimensional perception. The technique is based on duplicating the information-carrying optical waves that come from an object. Therefore, calculating the diffraction field due to the object is an important process in digital holography. To obtain an exact reconstruction of the object, the exact diffraction field created by the object has to be calculated. In the literature, one of the commonly used approaches to calculating the diffraction field due to an object is to superpose the fields created by the elementary building blocks of the object; such procedures may be called the "source model" approach, and a field computed this way can differ from the exact field over the entire space. In this work, we propose four algorithms to calculate the exact diffraction field due to an object. ...
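A standard building block for such diffraction-field computations is the angular-spectrum method: decompose the sampled field into plane waves with an FFT, multiply by the exact propagation phase, and transform back. The sketch below (with illustrative wavelength and sampling values, not the thesis's algorithms) propagates a Gaussian aperture and keeps only the propagating plane-wave components:

```python
import numpy as np

def angular_spectrum(u0, wavelength, dx, z):
    """Propagate a sampled scalar field u0 by distance z using the
    angular-spectrum (plane-wave decomposition) method."""
    n = u0.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    fx2 = fx[:, None]**2 + fx[None, :]**2
    k = 2 * np.pi / wavelength
    kz2 = k**2 - (2 * np.pi)**2 * fx2
    mask = kz2 > 0                            # keep only propagating plane waves
    kz = np.sqrt(np.maximum(kz2, 0.0))
    H = np.where(mask, np.exp(1j * kz * z), 0.0)
    return np.fft.ifft2(np.fft.fft2(u0) * H)

# A small Gaussian aperture: 0.5 um wavelength, 1 um sampling, 50 um propagation.
n = 64
x = (np.arange(n) - n // 2) * 1e-6
u0 = np.exp(-(x[:, None]**2 + x[None, :]**2) / (5e-6)**2).astype(complex)
uz = angular_spectrum(u0, 0.5e-6, 1e-6, 50e-6)
```

Because the transfer function has unit modulus on the propagating components, the operation is exactly invertible there: propagating by z and then by -z recovers the (band-limited) input field.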

Esmer, Gokhan Bora — Bilkent University


Advanced time-domain methods for nuclear magnetic resonance spectroscopy data analysis

Over the past years, magnetic resonance spectroscopy (MRS) has been of significant importance, both as a fundamental research technique in different fields and as a diagnostic tool in medical environments. With MRS, for example, spectroscopic information such as the concentrations of chemical substances can be determined non-invasively. To that end, the signals are first modeled by an appropriate model function, and mathematical techniques are subsequently applied to determine the model parameters. In this thesis, signal processing algorithms are developed to quantify in-vivo and ex-vivo MRS signals. These are usually characterized by a poor signal-to-noise ratio, overlapping peaks, deviations from the model function, and in some cases the presence of disturbing components (e.g., the residual water in proton spectra). The work presented in this thesis addresses part of the total effort to provide accurate, efficient and automatic data analysis ...
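The time-domain model function mentioned here is typically a sum of exponentially damped sinusoids fitted by nonlinear least squares. A minimal single-component sketch (toy sampling rate and parameters, not real MRS data) seeds the fit with an FFT peak and refines it with `scipy.optimize.curve_fit`:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)

def fid(t, a, d, f, phi):
    """One exponentially damped sinusoid -- the elementary time-domain
    model component used to quantify MRS signals."""
    return a * np.exp(-d * t) * np.cos(2 * np.pi * f * t + phi)

fs = 1000.0                                  # toy sampling rate in Hz
t = np.arange(0, 1.0, 1 / fs)
y = fid(t, a=1.0, d=3.0, f=55.0, phi=0.4) + 0.05 * rng.standard_normal(t.size)

# Coarse frequency start value from the FFT peak, then nonlinear least squares.
f0 = np.fft.rfftfreq(t.size, 1 / fs)[np.argmax(np.abs(np.fft.rfft(y)))]
popt, _ = curve_fit(fid, t, y, p0=[1.0, 1.0, f0, 0.0])
print("estimated (a, d, f, phi):", np.round(popt, 2))
```

The FFT seeding matters: the frequency parameter has a narrow basin of attraction, so starting the nonlinear fit far from the true frequency would converge to a local minimum.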

Vanhamme, Leentje — Katholieke Universiteit Leuven


Development of Fast Machine Learning Algorithms for False Discovery Rate Control in Large-Scale High-Dimensional Data

This dissertation develops false discovery rate (FDR) controlling machine learning algorithms for large-scale high-dimensional data. Ensuring the reproducibility of discoveries based on high-dimensional data is pivotal in numerous applications. The developed algorithms perform fast variable selection in large-scale high-dimensional settings where the number of variables may be much larger than the number of samples. This includes large-scale data with up to millions of variables, such as genome-wide association studies (GWAS). Theoretical finite-sample FDR-control guarantees based on martingale theory have been established, proving the trustworthiness of the developed methods. The practical open-source R software packages TRexSelector and tlars, which implement the proposed algorithms, have been published on the Comprehensive R Archive Network (CRAN). Extensive numerical experiments and real-world problems in biomedical and financial engineering demonstrate the performance in challenging use cases. The first three main parts of this dissertation present ...
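For readers unfamiliar with FDR control, the classic Benjamini-Hochberg step-up procedure illustrates the concept that the dissertation's variable-selection methods extend; note the thesis develops the T-Rex selector, not BH, so this is background only:

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.1):
    """Benjamini-Hochberg step-up procedure: return the indices whose null
    hypotheses are rejected while controlling the FDR at level q."""
    p = np.asarray(pvals)
    m = p.size
    order = np.argsort(p)
    thresh = q * np.arange(1, m + 1) / m       # critical values q*i/m
    below = p[order] <= thresh
    if not below.any():
        return np.array([], dtype=int)
    k = np.max(np.nonzero(below)[0])           # largest i with p_(i) <= q*i/m
    return np.sort(order[:k + 1])              # reject the k+1 smallest p-values

# 90 null p-values (uniform) and 10 strong signals (tiny p-values).
rng = np.random.default_rng(3)
pvals = np.concatenate([rng.uniform(size=90), rng.uniform(0, 1e-4, size=10)])
rejected = benjamini_hochberg(pvals, q=0.1)
print("rejections:", rejected)
```

All ten planted signals are recovered, while the number of false rejections stays near the target rate; the dissertation's methods pursue the same finite-sample guarantee in high-dimensional variable selection.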

Machkour, Jasin — Technische Universität Darmstadt


Advanced Algebraic Concepts for Efficient Multi-Channel Signal Processing

Modern society is undergoing a fundamental change in the way we interact with technology. More and more devices are becoming "smart" by gaining advanced computation capabilities and communication interfaces, from household appliances through transportation systems to large-scale networks like the power grid. Recording, processing, and exchanging digital information is thus becoming increasingly important. As a growing share of devices is nowadays mobile and hence battery-powered, a particular interest in efficient digital signal processing techniques has emerged. This thesis contributes to this goal by demonstrating methods for finding efficient algebraic solutions to various applications of multi-channel digital signal processing. These may not always result in the best possible system performance. However, they often come close while being significantly simpler to describe and implement. The simpler description facilitates a thorough analysis of their performance, which is crucial for designing robust and reliable ...

Roemer, Florian — Ilmenau University of Technology


Pointwise shape-adaptive DCT image filtering and signal-dependent noise estimation

When an image is acquired by a digital imaging sensor, it is always degraded by some noise. This leads to two basic questions: what are the main characteristics of this noise, and how can it be removed? These questions in turn correspond to two key problems in signal processing: noise estimation and noise removal (so-called denoising). This thesis addresses both of these problems and provides a number of original and effective contributions to their solution. The first part of the thesis introduces a novel image denoising algorithm based on the low-complexity Shape-Adaptive Discrete Cosine Transform (SA-DCT). Thanks to the spatially adaptive supports of the transform, the quality of the filtered image is high, with clean edges and without disturbing artifacts. We further present extensions of this approach to image deblurring, deringing and deblocking, as well as to color image filtering. For all these applications, ...
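The core mechanism, sparsifying the image in a DCT basis and discarding coefficients at the noise level, can be sketched with a global (fixed-support) 2-D DCT hard threshold. This is only a simplified cousin of the thesis's pointwise shape-adaptive filter, which instead adapts the transform support to local image structure:

```python
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(4)

def dct_denoise(noisy, sigma, k=3.0):
    """Global 2-D DCT hard-threshold denoising: zero every transform
    coefficient smaller than k*sigma, then invert the transform."""
    coeffs = dctn(noisy, norm='ortho')
    coeffs[np.abs(coeffs) < k * sigma] = 0.0   # kill coefficients at noise level
    return idctn(coeffs, norm='ortho')

# Smooth test image (Gaussian bump) plus white Gaussian noise.
yy, xx = np.mgrid[0:64, 0:64]
clean = np.exp(-((xx - 32.0)**2 + (yy - 32.0)**2) / (2 * 8.0**2))
sigma = 0.2
noisy = clean + sigma * rng.standard_normal(clean.shape)
denoised = dct_denoise(noisy, sigma)
mse_noisy = np.mean((noisy - clean)**2)
mse_denoised = np.mean((denoised - clean)**2)
print(f"MSE: noisy {mse_noisy:.4f} -> denoised {mse_denoised:.4f}")
```

With an orthonormal transform the noise stays white in the DCT domain, so the k*sigma threshold removes almost all noise energy while the smooth image, concentrated in a few large coefficients, survives; adapting the support (as in SA-DCT) is what preserves sharp edges as well.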

Foi, Alessandro — Tampere University of Technology


Modeling and Digital Mitigation of Transmitter Imperfections in Radio Communication Systems

To satisfy the continuously growing demand for higher data rates, modern radio communication systems employ larger bandwidths and more complex waveforms. Furthermore, radio devices are expected to support a rich mixture of standards such as cellular networks, wireless local-area networks, wireless personal-area networks, positioning and navigation systems, etc. In general, a "smart" device should be flexible enough to support all these requirements while being portable, cheap, and energy-efficient. These seemingly conflicting expectations impose stringent radio frequency (RF) design challenges which, in turn, call for a proper understanding of the resulting impairments as well as the development of cost-effective solutions to address them. The direct-conversion transceiver architecture is an appealing analog front-end for flexible and multi-standard radio systems. However, it is sensitive to various circuit impairments, and modern communication systems based on multi-carrier waveforms such as Orthogonal Frequency Division Multiplexing (OFDM) and Orthogonal Frequency Division Multiple ...
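One canonical direct-conversion impairment is I/Q imbalance, which can be written as a widely-linear model and inverted digitally. The sketch below uses the standard frequency-flat model with illustrative gain/phase mismatch values (not the thesis's measured front-ends):

```python
import numpy as np

rng = np.random.default_rng(5)

def iq_imbalance(x, gain=1.05, phase=np.deg2rad(3)):
    """Frequency-flat transmitter I/Q imbalance: the ideal baseband signal x
    leaks a scaled conjugate ("image") component conj(x)."""
    g1 = 0.5 * (1 + gain * np.exp(1j * phase))
    g2 = 0.5 * (1 - gain * np.exp(1j * phase))
    return g1 * x + g2 * np.conj(x), g1, g2

def compensate(y, g1, g2):
    """Digital compensation: invert the 2x2 widely-linear model exactly."""
    return (np.conj(g1) * y - g2 * np.conj(y)) / (np.abs(g1)**2 - np.abs(g2)**2)

x = (rng.standard_normal(1000) + 1j * rng.standard_normal(1000)) / np.sqrt(2)
y, g1, g2 = iq_imbalance(x)
irr_db = 20 * np.log10(np.abs(g1) / np.abs(g2))   # image rejection ratio
x_hat = compensate(y, g1, g2)
print(f"image rejection ratio: {irr_db:.1f} dB")
```

A 5% gain and 3-degree phase mismatch already limits the image rejection to roughly 29 dB, which is why digital mitigation of this kind is essential for wideband multi-carrier waveforms.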

Kiayani, Adnan — Tampere University of Technology


Adaptive Nonlocal Signal Restoration and Enhancement Techniques for High-Dimensional Data

The large number of practical applications involving digital images has motivated significant interest in restoration solutions that improve the visual quality of the data in the presence of various acquisition and compression artifacts. Digital images are the result of an acquisition process based on the measurement of a physical quantity of interest incident upon an imaging sensor over a specified period of time. The quantity of interest depends on the targeted imaging application. Common imaging sensors measure the number of photons impinging on a dense grid of photodetectors in order to produce an image similar to what is perceived by the human visual system. Other applications focus on parts of the electromagnetic spectrum not visible to the human visual system, and thus require different sensing technologies to form the image. In all cases, even with the advance of ...

Maggioni, Matteo — Tampere University of Technology


Some Contributions to Adaptive Filtering for Acoustic Multiple-Input/Multiple-Output Systems in the Wave Domain

Recently emerged techniques like wave field synthesis (WFS) and Higher-Order Ambisonics (HOA) allow for high-quality spatial audio reproduction, which makes them candidates for audio reproduction in future telepresence systems or interactive gaming environments with acoustic human-machine interfaces. In such scenarios, acoustic echo cancellation (AEC) will generally be necessary to remove the loudspeaker echoes from the recorded microphone signals before further processing. Moreover, the reproduction quality of WFS or HOA can be improved by adaptive pre-equalization of the loudspeaker signals, as facilitated by listening-room equalization (LRE). However, AEC and LRE require adaptive filters, and the large number of reproduction channels in WFS and HOA implies major computational and algorithmic challenges for their implementation. A technique called wave-domain adaptive filtering (WDAF) promises to master these challenges. However, the known literature is still far from providing sufficient insight ...
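The adaptive-filter building block behind AEC can be illustrated with the single-channel normalized LMS algorithm; the thesis's wave-domain formulation exists precisely because running many such filters naively across massive WFS/HOA channel counts is infeasible. A textbook sketch with a toy echo path (all parameters illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)

def nlms_echo_canceller(x, d, taps=32, mu=0.5, eps=1e-6):
    """Normalized LMS: adapt an FIR filter w so its output tracks the echo
    of the loudspeaker signal x contained in the microphone signal d."""
    w = np.zeros(taps)
    e = np.zeros(d.size)
    for n in range(taps - 1, d.size):
        u = x[n - taps + 1:n + 1][::-1]       # x[n], x[n-1], ..., x[n-taps+1]
        y = w @ u                             # echo estimate
        e[n] = d[n] - y                       # residual after cancellation
        w += mu * e[n] * u / (u @ u + eps)    # normalized gradient step
    return w, e

h = rng.standard_normal(32) * np.exp(-0.2 * np.arange(32))  # toy room echo path
x = rng.standard_normal(20_000)                             # loudspeaker signal
d = np.convolve(x, h)[:x.size]                              # microphone echo
w, e = nlms_echo_canceller(x, d)
erle = 10 * np.log10(np.mean(d[-2000:]**2) / np.mean(e[-2000:]**2))
print(f"echo return loss enhancement: {erle:.1f} dB")
```

After convergence the filter matches the echo path and the residual vanishes; in a WFS setup every loudspeaker-microphone pair would need such a filter, which motivates the wave-domain decoupling studied in the thesis.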

Schneider, Martin — Friedrich-Alexander-University Erlangen-Nuremberg


Compressed sensing approaches to large-scale tensor decompositions

Today’s society is characterized by an abundance of data generated at an unprecedented velocity. However, much of this data is immediately thrown away by compression or information extraction. In a compressed sensing (CS) setting, the inherent sparsity of many datasets is exploited by avoiding the acquisition of superfluous data in the first place. We combine this technique with tensors, or multiway arrays of numerical values, which are higher-order generalizations of vectors and matrices. As the number of entries scales exponentially in the order, tensor problems are often large-scale. We show that combining simple, low-rank tensor decompositions with CS effectively alleviates or even breaks the so-called curse of dimensionality. After discussing the larger data fusion optimization framework for coupled and constrained tensor decompositions, we investigate three categories of CS-type algorithms to deal with large-scale problems. First, ...
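The low-rank decomposition at the heart of this program is the canonical polyadic (CP) model, classically computed by alternating least squares. The following is a plain baseline sketch on a small exact-rank tensor, not one of the thesis's compressed-sensing algorithms:

```python
import numpy as np

rng = np.random.default_rng(7)

def khatri_rao(a, b):
    """Column-wise Kronecker product of (J,R) and (K,R) factors -> (J*K, R)."""
    return np.einsum('jr,kr->jkr', a, b).reshape(-1, a.shape[1])

def cp_als(t, rank, iters=50):
    """Alternating least squares for a rank-R CP decomposition of a 3-way
    tensor: cycle through the modes, solving a linear LS problem for each."""
    i_, j_, k_ = t.shape
    a = rng.standard_normal((i_, rank))
    b = rng.standard_normal((j_, rank))
    c = rng.standard_normal((k_, rank))
    for _ in range(iters):
        a = np.linalg.lstsq(khatri_rao(b, c), t.reshape(i_, -1).T, rcond=None)[0].T
        b = np.linalg.lstsq(khatri_rao(a, c),
                            np.moveaxis(t, 1, 0).reshape(j_, -1).T, rcond=None)[0].T
        c = np.linalg.lstsq(khatri_rao(a, b),
                            np.moveaxis(t, 2, 0).reshape(k_, -1).T, rcond=None)[0].T
    return a, b, c

# Exact rank-3 tensor of size 10x10x10, then recover a rank-3 model.
f = [rng.standard_normal((10, 3)) for _ in range(3)]
t = np.einsum('ir,jr,kr->ijk', *f)
a, b, c = cp_als(t, 3)
t_hat = np.einsum('ir,jr,kr->ijk', a, b, c)
rel_err = np.linalg.norm(t_hat - t) / np.linalg.norm(t)
print(f"relative reconstruction error: {rel_err:.2e}")
```

A rank-3 model of a 10x10x10 tensor stores 90 numbers instead of 1000, and the gap widens exponentially with the order; that compression is what the thesis exploits jointly with CS-style subsampling.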

Vervliet, Nico — KU Leuven
