Local Prior Knowledge in Tomography

Computed tomography (CT) is a technique that uses computation to form an image of the inside of an object or person, by combining projections of that object or person. The word tomography is derived from the Greek word tomos, meaning slice. The basis for computed tomography was laid in 1917 by Johann Radon, an Austrian mathematician. Computed tomography has a broad range of applications, the best known being medical imaging (the CT scanner), where X-rays are used for making the projection images. The first practical application of CT was, however, in astronomy, by Ronald Bracewell in 1956. He used CT to improve the resolution of radio-astronomical observations. The practical applications in this thesis are from electron tomography, where the images are made with an electron microscope, and from preclinical research, where the images are made with a CT scanner. There ...

Roelandts, Tom — University of Antwerp
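
For reference, the combination of projections described above is usually modeled with the Radon transform; a standard textbook statement (not taken from the thesis itself) is:

```latex
% Radon transform: p(theta, s) is the line integral of the object f(x, y)
% along the line at angle theta and signed offset s from the origin.
\[
  p(\theta, s) = \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty}
    f(x, y)\,\delta(x\cos\theta + y\sin\theta - s)\,\mathrm{d}x\,\mathrm{d}y
\]
```

Reconstruction then amounts to inverting this transform from projections measured at a finite set of angles.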


Towards In Loco X-ray Computed Tomography

Computed tomography (CT) is a non-invasive imaging technique that makes it possible to reveal the inner structure of an object by combining a series of projection images that were acquired from different directions. CT nowadays has a broad range of applications, including those in medicine, preclinical research, nondestructive testing, materials science, etc. One common feature of the tomographic setups used in most applications is the requirement to put an object into a scanner. The first major disadvantage of such a requirement is the constraint imposed on the size of the object that can be scanned. The second is the need to move the object, which might be difficult or might cause undesirable changes in the object. The possibility of performing in loco, i.e. on-site, tomography would open up numerous applications for tomography in nondestructive testing, security, medicine, archaeology ...

Dabravolski, Andrei — University of Antwerp


Inverse Scattering Procedures for the Reconstruction of One-Dimensional Permittivity Range Profiles

Inverse scattering is relevant to a very large class of problems, where the unknown structure of a scattering object is estimated by measuring the scattered field produced by known probing waves. For more than three decades, therefore, the promise of non-invasive imaging inspection by electromagnetic probing radiation has justified research interest in these techniques. Several application areas are involved, such as civil and industrial engineering, non-destructive testing and medical imaging, as well as subsurface inspection for oil exploration or unexploded devices. In spite of this relevance, most scattering tomography techniques are not reliable enough to solve practical problems. Indeed, the nonlinear relationship between the scattered field and the object function and the robustness of the inversion algorithms are still open issues. In particular, microwave tomography presents a number of specific difficulties that make it much more involved to ...

Genovesi, Simone — University of Pisa
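
As background to the nonlinearity mentioned in the abstract: in the standard contrast-source formulation (a textbook statement, not a result of this thesis), the scattered field depends on the product of the unknown contrast and the total field inside the object,

```latex
% Data equation of inverse scattering: E^s is the scattered field, G the
% background Green's function, chi = eps_r/eps_b - 1 the unknown contrast,
% and E the total field inside the investigation domain D.
\[
  E^{s}(\mathbf{r}) = k_b^{2} \int_{D} G(\mathbf{r}, \mathbf{r}')\,
    \chi(\mathbf{r}')\, E(\mathbf{r}')\, \mathrm{d}\mathbf{r}'
\]
```

and since the total field E itself depends on the contrast through a companion state equation, the measured data are nonlinear in the unknown permittivity profile.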


Numerical Approaches for Solving the Combined Reconstruction and Registration of Digital Breast Tomosynthesis

The last three decades have witnessed heavy demand for medical imaging modalities for breast cancer detection, in an attempt to reduce the mortality associated with the disease. Recently, Digital Breast Tomosynthesis (DBT) has shown promise for early diagnosis, when lesions are small. In particular, it offers potential benefits over X-ray mammography - the current modality of choice for breast screening - of increased sensitivity and specificity for comparable X-ray dose, speed, and cost. An important feature of DBT is that it provides a pseudo-3D image of the breast. This is of particular relevance for the heterogeneous dense breasts of young women, whose density can inhibit the detection of cancer using conventional mammography. In the same way that it is difficult to see a bird from the edge of the forest, detecting cancer in a conventional 2D mammogram ...

Yang, Guang — University College London


Tradeoffs and limitations in statistically based image reconstruction problems

Advanced nuclear medical imaging systems collect multiple attributes of a large number of photon events, resulting in extremely large datasets that present challenges to image reconstruction and assessment. This dissertation addresses several of these challenges. The image formation process in nuclear medical imaging can be posed as a parametric estimation problem where the image pixels are the parameters of interest. Since nuclear medical imaging applications often involve ill-posed inverse problems, unbiased estimators result in very noisy, high-variance images. Typically, smoothness constraints and a priori information are used to reduce variance in medical imaging applications at the cost of biasing the estimator. For such problems, there exists an inherent tradeoff between the recovered spatial resolution of an estimator, its overall bias, and its statistical variance; lower variance can only be bought at the price of decreased spatial resolution and/or increased overall bias. ...

Kragh, Tom — University of Michigan
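
The tradeoff described above is commonly formalized as a penalized-likelihood (MAP) estimator; a generic statement (not the dissertation's specific formulation) is:

```latex
% Penalized-likelihood reconstruction: L is the log-likelihood of the data y
% given image x, R a roughness penalty, and beta >= 0 the tradeoff parameter.
% beta = 0 gives the noisy, high-variance ML estimate; increasing beta
% lowers variance at the cost of bias and spatial resolution.
\[
  \hat{\mathbf{x}} = \arg\max_{\mathbf{x} \ge 0}\;
    L(\mathbf{y} \mid \mathbf{x}) - \beta\, R(\mathbf{x})
\]
```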


Towards an Automated Portable Electroencephalography-based System for Alzheimer’s Disease Diagnosis

Alzheimer’s disease (AD) is a neurodegenerative terminal disorder that accounts for nearly 70% of dementia cases worldwide. Global dementia incidence is projected to reach 75 million cases by 2030, with the majority of the affected individuals coming from low- and middle-income countries. Although there is no cure for AD, early diagnosis can improve the quality of life of AD patients and their caregivers. Currently, AD diagnosis is carried out using mental status examinations, expensive neuroimaging scans, and invasive laboratory tests, all of which render the diagnosis time-consuming and costly. Nevertheless, over the last decade electroencephalography (EEG), specifically resting-state EEG (rsEEG), has emerged as an alternative technique for AD diagnosis, with accuracies in line with those obtained with more expensive neuroimaging tools, such as magnetic resonance imaging (MRI), computed tomography (CT) and positron emission tomography (PET). However, the use of rsEEG for ...

Cassani, Raymundo — Université du Québec, Institut national de la recherche scientifique


Compressive Sensing Based Candidate Detector and its Applications to Spectrum Sensing and Through-the-Wall Radar Imaging

Signal acquisition is a main topic in signal processing. The well-known Shannon-Nyquist theorem lies at the heart of any conventional analog-to-digital converter, stating that a signal has to be sampled at a constant rate of at least twice the highest frequency present in the signal in order to be perfectly recovered. However, the Shannon-Nyquist theorem provides a worst-case rate bound for any bandlimited data. In this context, Compressive Sensing (CS) is a new framework in which data acquisition and data processing are merged. CS compresses the data while they are sampled by exploiting the sparsity present in many common signals. In so doing, it provides an efficient way to reduce the number of measurements needed for perfect recovery of the signal. CS has exploded in recent years with thousands of technical publications and applications ...

Lagunas, Eva — Universitat Politecnica de Catalunya
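
A minimal numerical illustration of the CS idea sketched above (an editorial sketch with assumed sizes, not code from the thesis): a k-sparse signal is recovered from m << n random measurements by orthogonal matching pursuit.

```python
# Illustrative compressive-sensing sketch (assumed sizes, not the thesis's
# detector): recover a k-sparse signal from m << n random projections.
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 64, 5                           # ambient dim, measurements, sparsity

x = np.zeros(n)                                # k-sparse ground truth
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)   # random Gaussian sensing matrix
y = A @ x                                      # compressed measurements

# Orthogonal matching pursuit: greedily select the column most correlated
# with the residual, then least-squares refit on the selected support.
support, residual = [], y.copy()
for _ in range(k):
    support.append(int(np.argmax(np.abs(A.T @ residual))))
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    residual = y - A[:, support] @ coef

x_hat = np.zeros(n)
x_hat[support] = coef
print("relative recovery error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```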


Digital Processing Based Solutions for Life Science Engineering Recognition Problems

The field of Life Science Engineering (LSE) is rapidly expanding and predicted to grow strongly in the next decades. It covers areas of food and medical research, plant and pest research, and environmental research. In each research area, engineers try to find equations that model a certain life science problem. Once found, they research different numerical techniques to solve for the unknown variables of these equations. Afterwards, solution improvement is examined by adopting more accurate conventional techniques, or by developing novel algorithms. In particular, signal and image processing techniques are widely used to solve those LSE problems that require pattern recognition. However, due to the continuous evolution of life science problems and their natures, these solution techniques cannot cover all aspects, and therefore demand further enhancement and improvement. The thesis presents numerical algorithms of digital signal and image processing to ...

Hussein, Walid — Technische Universität München


Video person recognition strategies using head motion and facial appearance

In this doctoral dissertation, we principally explore the use of the temporal information available in video sequences for person and gender recognition; in particular, we focus on the analysis of head and facial motion, and their potential application as biometric identifiers. We also investigate how to exploit as much video information as possible for the automatic recognition; more precisely, we examine the possibility of integrating the head and mouth motion information with facial appearance into a multimodal biometric system, and we study the extraction of novel spatio-temporal facial features for recognition. We initially present a person recognition system that exploits the unconstrained head motion information, extracted by tracking a few facial landmarks in the image plane. In particular, we detail how each video sequence is firstly pre-processed by semiautomatically detecting the face, and then automatically tracking the facial landmarks over ...

Matta, Federico — Eurécom / Multimedia communications
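
The tracking stage described above can be illustrated with pyramidal Lucas-Kanade optical flow; this is a generic stand-in (the thesis's own semiautomatic detector and tracker may differ), and 'video.avi' is a placeholder path.

```python
# Illustrative landmark tracking with pyramidal Lucas-Kanade optical flow.
# Requires OpenCV; assumes the first frame shows the face and yields corners.
import cv2
import numpy as np

cap = cv2.VideoCapture("video.avi")       # placeholder path
ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Stand-in for semiautomatic landmark selection: strong corners in the frame.
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=10,
                              qualityLevel=0.01, minDistance=10)

trajectories = [pts.reshape(-1, 2)]
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    trajectories.append(pts.reshape(-1, 2))
    prev_gray = gray

# Per-frame landmark positions: the raw head-motion signal from which
# dynamic features can be derived. Shape: (frames, landmarks, 2).
motion = np.stack(trajectories)
```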


Bayesian Compressed Sensing using Alpha-Stable Distributions

During the last decades, information has been gathered and processed at an explosive rate. This fact gives rise to a very important issue, that is, how to effectively and precisely describe the information content of a given source signal or an ensemble of source signals, such that it can be stored, processed or transmitted by taking into consideration the limitations and capabilities of the various digital devices. One of the fundamental principles of signal processing for decades has been the Nyquist-Shannon sampling theorem, which states that the minimum number of samples needed to reconstruct a signal without error is dictated by its bandwidth. However, there are many cases in our everyday life in which sampling at the Nyquist rate results in too much data, demanding increased processing power as well as storage. A mathematical theory that emerged ...

Tzagkarakis, George — University of Crete
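
A quick illustration of why alpha-stable distributions suit this setting (an editorial sketch, not the thesis's estimator): with stability index alpha < 2 the tails decay polynomially, favouring a few large coefficients among many small ones, which is the behaviour sparse Bayesian models exploit.

```python
# Compare tail mass of Gaussian samples (alpha = 2) against a heavy-tailed
# alpha-stable draw. Parameter values are illustrative assumptions.
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(0)
n = 10_000
gauss = rng.standard_normal(n)                                  # alpha = 2
stable = levy_stable.rvs(alpha=1.2, beta=0.0, size=n, random_state=0)

for name, s in [("gaussian", gauss), ("alpha=1.2 stable", stable)]:
    # Fraction of samples far out in the tails, relative to typical magnitude.
    frac = np.mean(np.abs(s) > 5 * np.median(np.abs(s)))
    print(f"{name}: fraction beyond 5x median magnitude = {frac:.4f}")
```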


Film and Video Restoration using Rank-Order Models

This thesis introduces the rank-order model and investigates its use in several image restoration problems. More commonly used as filters, the rank-order operators are here employed as predictors. A Laplacian excitation sequence is chosen to complete the model. Images are generated with the model and compared with those formed with an AR model. A multidimensional rank-order model is formed from vector medians for use with multidimensional image data. The first application using the rank-order model is an impulsive noise detector. This exploits the notion of ‘multimodality’ in the histogram of a difference image of the degraded image and a rank-order filtered version. It uses the EM algorithm and a mixture model to automatically determine thresholds for detecting the impulsive noise. This method compares well with other detection methods, which require manual setting of thresholds, and with stack filtering, which requires ...

Armstrong, Steven — University of Cambridge
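
A simplified version of the detection idea above (the thesis sets the threshold automatically with an EM-fitted mixture; here a hand-set threshold stands in): impulses are flagged where the image differs strongly from its rank-order (median) filtered version.

```python
# Toy impulsive-noise detection via the difference image between a degraded
# image and its median-filtered version. Sizes and threshold are assumptions.
import numpy as np
from scipy.ndimage import median_filter

rng = np.random.default_rng(0)
img = rng.normal(128, 10, size=(64, 64))            # stand-in "clean" image
mask_true = rng.random(img.shape) < 0.05            # 5% impulsive corruption
degraded = img.copy()
degraded[mask_true] = rng.choice([0, 255], size=mask_true.sum())

diff = np.abs(degraded - median_filter(degraded, size=3))
detected = diff > 40.0                              # hand-set threshold (EM in thesis)

hits = (detected & mask_true).sum()
print("flagged:", detected.sum(), "true impulses:", mask_true.sum(), "hits:", hits)
```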


Signal processing algorithms for wireless acoustic sensor networks

Recent academic developments have initiated a paradigm shift in the way spatial sensor data can be acquired. Traditional localized and regularly arranged sensor arrays are replaced by sensor nodes that are randomly distributed over the entire spatial field, and which communicate with each other or with a master node through wireless communication links. Together, these nodes form a so-called ‘wireless sensor network’ (WSN). Each node of a WSN has a local sensor array and a signal processing unit to perform computations on the acquired data. The advantage of WSNs compared to traditional (wired) sensor arrays is that many more sensors can be used that physically cover the full spatial field, which typically yields more variety (and thus more information) in the signals. It is likely that future data acquisition, control, and physical monitoring will heavily rely on this type of ...

Bertrand, Alexander — Katholieke Universiteit Leuven
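
A generic illustration of in-network computation in a WSN (a canonical average-consensus iteration, not an algorithm from this thesis): each node repeatedly averages its value with its neighbours', and all nodes converge to the network-wide mean without any central collection point.

```python
# Average consensus over a random geometric network. Node count, radio
# range, and readings are illustrative assumptions; graph assumed connected.
import numpy as np

rng = np.random.default_rng(0)
n_nodes = 20
pos = rng.random((n_nodes, 2))                          # random node positions
dist = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
adj = (dist < 0.45) & ~np.eye(n_nodes, dtype=bool)      # link if within radio range

x = rng.normal(5.0, 2.0, n_nodes)                       # local sensor readings
target = x.mean()

# Metropolis weights give a doubly stochastic mixing matrix, so repeated
# local averaging drives every node to the exact network-wide mean.
deg = adj.sum(axis=1)
W = np.where(adj, 1.0 / (1.0 + np.maximum(deg[:, None], deg[None, :])), 0.0)
np.fill_diagonal(W, 1.0 - W.sum(axis=1))

for _ in range(300):
    x = W @ x                                           # one round of local exchanges

print("max deviation from true mean:", float(np.abs(x - target).max()))
```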


Advances in unobtrusive monitoring of sleep apnea using machine learning

Obstructive sleep apnea (OSA) is among the most prevalent sleep disorders, estimated to affect 6%–19% of women and 13%–33% of men. Besides daytime sleepiness, impaired cognitive functioning and an increased risk of accidents, OSA may lead to obesity, diabetes and cardiovascular diseases (CVD) in the long term. Its prevalence is only expected to rise, as it is linked to aging and excessive body fat. Nevertheless, many patients remain undiagnosed and untreated due to the cumbersome clinical diagnostic procedure, for which the patient is required to sleep with an extensive set of body-attached sensors. In addition, the recordings only provide a single-night perspective on the patient in an uncomfortable, and often unknown, environment. Thus, large-scale monitoring at home is desired, with comfortable sensors which can stay in place for several nights. To ...

Huysmans, Dorien — KU Leuven


Exploiting Sparsity for Efficient Compression and Analysis of ECG and Fetal-ECG Signals

Over the last decade there has been an increasing interest in solutions for the continuous monitoring of health status with wireless, and in particular wearable, devices that provide remote analysis of physiological data. The use of wireless technologies has introduced new problems, such as the transmission of a huge amount of data within the constraints of devices with limited battery life. The design of an accurate and energy-efficient telemonitoring system can be achieved by reducing the amount of data that must be transmitted, which is still a challenging task on devices with both computational and energy constraints. Furthermore, it is not sufficient merely to collect and transmit data; algorithms that provide real-time analysis are also needed. In this thesis, we address the problems of compression and analysis of physiological data using the emerging frameworks of Compressive Sensing (CS) and sparse ...

Da Poian, Giulia — University of Udine
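
A sketch of the encoder-side idea (illustrative; the window length, measurement count, and matrix choice are assumptions, not the thesis design): compress a window of ECG samples to m << n random projections before transmission, shifting work away from the battery-constrained sensor.

```python
# CS encoder sketch for an energy-constrained wearable device.
import numpy as np

rng = np.random.default_rng(0)
n, m = 512, 128                                  # window length, measurements

# Sparse binary sensing matrix: each column has a few nonzeros, so the
# encoder needs only a handful of additions per sample (no multiplies).
Phi = np.zeros((m, n))
for j in range(n):
    Phi[rng.choice(m, 4, replace=False), j] = 1.0

# Toy stand-in signal: ~72 bpm sinusoid at a 360 Hz sampling rate.
ecg_window = np.sin(2 * np.pi * 1.2 * np.arange(n) / 360.0)
y = Phi @ ecg_window                             # transmitted measurements

print("compression ratio:", n / m)               # 4x fewer samples on the radio
```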


Direction of Arrival Estimation and Localization Exploiting Sparse and One-Bit Sampling

Data acquisition is a necessary first step in digital signal processing applications such as radar, wireless communications and array processing. Traditionally, this process is performed by uniformly sampling signals at a frequency above the Nyquist rate and converting the resulting samples into digital numeric values through high-resolution amplitude quantization. While the traditional approach to data acquisition is straightforward and extremely well-proven, it may be either impractical or impossible in many modern applications due to the existing fundamental trade-off between sampling rate, amplitude quantization precision, implementation costs, and usage of physical resources, e.g. bandwidth and power consumption. Motivated by this fact, system designers have recently proposed exploiting sparse and few-bit quantized sampling instead of the traditional way of data acquisition in order to reduce implementation costs and usage of physical resources in such applications. However, before transitioning from the traditional data ...

Sedighi, Saeid — University of Luxembourg
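
A minimal one-bit DOA illustration (an editorial sketch with assumed parameters, not the estimator developed in the thesis): quantize uniform-linear-array snapshots to their sign, rebuild the normalized covariance via the arcsine law (valid for zero-mean Gaussian inputs), and scan a Bartlett beamformer for the peak.

```python
# One-bit direction-of-arrival sketch on a half-wavelength ULA.
import numpy as np

rng = np.random.default_rng(0)
M, T, theta_true = 8, 2000, 20.0                    # sensors, snapshots, degrees

a = lambda th: np.exp(1j * np.pi * np.arange(M) * np.sin(np.deg2rad(th)))
s = (rng.standard_normal(T) + 1j * rng.standard_normal(T)) / np.sqrt(2)
noise = (rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T))) / np.sqrt(2)
x = np.outer(a(theta_true), s) + 0.5 * noise        # full-precision snapshots

# One-bit ADC: keep only the sign of the real and imaginary parts.
y = (np.sign(x.real) + 1j * np.sign(x.imag)) / np.sqrt(2)

# Arcsine law: for Gaussian inputs, the sign-data covariance equals
# (2/pi) arcsin of the normalized input covariance, real and imaginary
# parts separately; invert it with a sine.
Ry = (y @ y.conj().T) / T
Rx = np.sin(np.pi / 2 * Ry.real) + 1j * np.sin(np.pi / 2 * Ry.imag)

grid = np.arange(-90.0, 90.5, 0.5)
spectrum = [np.real(a(th).conj() @ Rx @ a(th)) for th in grid]
print("estimated DOA:", grid[int(np.argmax(spectrum))], "deg")
```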
