Inverse Scattering Procedures for the Reconstruction of One-Dimensional Permittivity Range Profiles (2007)
Sparse Signal Recovery From Incomplete And Perturbed Data
Sparse signal recovery consists of algorithms that accurately recover undersampled, high-dimensional signals. These algorithms require fewer measurements than the traditional Shannon/Nyquist sampling theorem demands. Sparse signal recovery has found many applications, including magnetic resonance imaging, electromagnetic inverse scattering, radar/sonar imaging, seismic data collection, sensor array processing and channel estimation. The focus of this thesis is on the electromagnetic inverse scattering problem and on joint estimation of the frequency offset and the channel impulse response in OFDM. In the electromagnetic inverse scattering problem, the aim is to find the electromagnetic properties of unknown targets from the measured scattered field. The reconstruction of closely spaced point-like objects is investigated. The application of the greedy-pursuit-based sparse recovery methods OMP and FTB-OMP is proposed for increasing the reconstruction resolution. The performance of the proposed methods is compared against the NESTA and MT-BCS methods. ...
Senyuva, Rifat Volkan — Bogazici University
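The abstract above refers to Orthogonal Matching Pursuit (OMP). As an illustration only (not the thesis implementation), here is a minimal pure-Python sketch of OMP on a small dictionary whose columns are assumed mutually orthogonal, so the least-squares step on the selected support reduces to scalar projections:

```python
# Minimal OMP sketch (illustrative; not the thesis implementation).
# Assumption: the dictionary atoms are mutually orthogonal, so the
# least-squares update decouples into independent 1-D projections.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def omp(y, atoms, sparsity):
    """Greedily select `sparsity` atoms and their coefficients."""
    residual = list(y)
    coeffs = {}
    for _ in range(sparsity):
        # Pick the atom most correlated with the current residual.
        j = max(range(len(atoms)),
                key=lambda k: abs(dot(residual, atoms[k])))
        # Orthogonal atoms: the LS coefficient is a scalar projection.
        c = dot(residual, atoms[j]) / dot(atoms[j], atoms[j])
        coeffs[j] = coeffs.get(j, 0.0) + c
        residual = [r - c * a for r, a in zip(residual, atoms[j])]
    return coeffs, residual

# 2-sparse signal y = 3*a1 + 2*a2 in an orthogonal dictionary.
atoms = [[1, 1, 0, 0], [1, -1, 0, 0], [0, 0, 1, 1], [0, 0, 1, -1]]
y = [3, -3, 2, 2]
coeffs, residual = omp(y, atoms, sparsity=2)
print(coeffs)  # recovers atoms 1 and 2 with weights 3.0 and 2.0
```

With exactly sparse, noise-free data and an orthogonal dictionary, OMP recovers the support exactly; the resolution questions studied in the thesis arise precisely when atoms become strongly correlated.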
Local Prior Knowledge in Tomography
Computed tomography (CT) is a technique that uses computation to form an image of the inside of an object or person, by combining projections of that object or person. The word tomography is derived from the Greek word tomos, meaning slice. The basis for computed tomography was laid in 1917 by Johann Radon, an Austrian mathematician. Computed tomography has a broad range of applications, the best known being medical imaging (the CT scanner), where X-rays are used for making the projection images. The first practical application of CT was, however, in astronomy, by Ronald Bracewell in 1956. He used CT to improve the resolution of radio-astronomical observations. The practical applications in this thesis are from electron tomography, where the images are made with an electron microscope, and from preclinical research, where the images are made with a CT scanner. There ...
Roelandts, Tom — University of Antwerp
Model-based iterative reconstruction algorithms for computed tomography
Computed Tomography (CT) is a powerful tool for non-destructive imaging in which an object's interior is visualized by computing a reconstruction from a set of projection images. The technique can be applied in various modalities, ranging from a typical X-ray CT scanner to electron microscopy and synchrotron beamlines. Often, only limited projection data is available, which makes the reconstruction process more difficult and results in reconstruction artifacts if standard techniques are employed. Limited data problems can arise in a variety of applications. In medical CT, the acquisition of only a limited number of projections is beneficial to reduce the radiation dose delivered to the patient. In electron tomography, the sample can only be rotated over a limited tilt range due to mechanical constraints, and the number of acquisition angles is often kept relatively small to avoid beam damage. In dynamic CT, the time ...
Geert Van Eyndhoven — University of Antwerp
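Iterative reconstruction algorithms of the kind discussed above update the image one projection ray at a time. Purely as an illustration (matrix and data invented; the thesis concerns far larger model-based reconstructions), here is a Kaczmarz (ART) iteration on a toy 2x2 system:

```python
# Kaczmarz / ART sketch (toy example, not the thesis code).
# Each row of A plays the role of one projection ray; x is the image.

def kaczmarz(A, b, sweeps):
    x = [0.0] * len(A[0])
    for _ in range(sweeps):
        for row, bi in zip(A, b):
            # Project x onto the hyperplane {x : row . x = bi}.
            step = bi - sum(a * xi for a, xi in zip(row, x))
            step /= sum(a * a for a in row)
            x = [xi + step * a for xi, a in zip(x, row)]
    return x

A = [[1.0, 1.0], [1.0, 2.0]]   # toy "projection" matrix
b = [2.0, 3.0]                 # data generated from the image [1, 1]
x = kaczmarz(A, b, sweeps=200)
print(x)  # approaches [1.0, 1.0]
```

For consistent systems the iterates converge to a solution; with limited or noisy data, the prior knowledge discussed in these theses is what keeps such iterations well-behaved.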
Towards In Loco X-ray Computed Tomography
Computed tomography (CT) is a non-invasive imaging technique that makes it possible to reveal the inner structure of an object by combining a series of projection images acquired from different directions. CT nowadays has a broad range of applications, including those in medicine, preclinical research, non-destructive testing, materials science, etc. One common feature of the tomographic setups used in most applications is the requirement to put the object into a scanner. The first major disadvantage of such a requirement is the constraint imposed on the size of the object that can be scanned. The second is the need to move the object, which might be difficult or might cause undesirable changes in the object. The possibility to perform in loco, i.e. on site, tomography would open up numerous applications for tomography in non-destructive testing, security, medicine, archaeology ...
Dabravolski, Andrei — University of Antwerp
Tradeoffs and limitations in statistically based image reconstruction problems
Advanced nuclear medical imaging systems collect multiple attributes of a large number of photon events, resulting in extremely large datasets which present challenges to image reconstruction and assessment. This dissertation addresses several of these challenges. The image formation process in nuclear medical imaging can be posed as a parametric estimation problem where the image pixels are the parameters of interest. Since nuclear medical imaging applications are often ill-posed inverse problems, unbiased estimators result in very noisy, high-variance images. Typically, smoothness constraints and a priori information are used to reduce variance in medical imaging applications at the cost of biasing the estimator. For such problems, there exists an inherent tradeoff between the recovered spatial resolution of an estimator, overall bias, and its statistical variance; lower variance can only be bought at the price of decreased spatial resolution and/or increased overall bias. ...
Kragh, Tom — University of Michigan
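The bias-variance tradeoff described above can be made concrete with a scalar shrinkage estimator. This toy calculation (numbers invented for illustration, not taken from the dissertation) shows that accepting bias can lower the overall mean squared error:

```python
# Bias-variance tradeoff sketch (toy numbers, not from the dissertation).
# Estimate a mean theta from n samples of variance sigma2 using the
# shrunk estimator c * sample_mean:
#   bias     = (c - 1) * theta
#   variance = c**2 * sigma2 / n
#   MSE(c)   = bias**2 + variance

theta, sigma2, n = 1.0, 4.0, 4

def mse(c):
    bias = (c - 1.0) * theta
    var = c * c * sigma2 / n
    return bias * bias + var

unbiased = mse(1.0)                          # = sigma2 / n = 1.0
c_star = theta**2 / (theta**2 + sigma2 / n)  # optimal shrinkage = 0.5
shrunk = mse(c_star)                         # = 0.5

print(unbiased, shrunk)  # the biased estimator halves the MSE
```

The same mechanism underlies smoothness penalties in image reconstruction: variance is traded for bias (and spatial resolution), exactly the tradeoff the dissertation quantifies.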
Sensing physical fields: Inverse problems for the diffusion equation and beyond
Due to significant advances made over the last few decades in the areas of (wireless) networking, communications and microprocessor fabrication, the use of sensor networks to observe physical phenomena is rapidly becoming commonplace. Over this period, many aspects of sensor networks have been explored, yet a thorough understanding of how to analyse and process the vast amounts of sensor data collected remains an open area of research. This work, therefore, aims to provide theoretical, as well as practical, advances in this area. In particular, we consider the problem of inferring certain underlying properties of the monitored phenomena from our sensor measurements. Within mathematics, this is commonly formulated as an inverse problem, whereas in signal processing it appears as a (multidimensional) sampling and reconstruction problem. Indeed, it is well known that inverse problems are notoriously ill-posed and very demanding to solve; meanwhile ...
Murray-Bruce, John — Imperial College London
Reverberation is a complex acoustic phenomenon that occurs inside rooms. Many audio signal processing methods, addressing source localization, signal enhancement and other tasks, often assume the absence of reverberation. Consequently, reverberant environments are considered challenging, as state-of-the-art methods can perform poorly in them. The acoustics of a room can be described using a variety of mathematical models, among which physical models are the most complete and accurate. The use of physical models in audio signal processing methods is often non-trivial, since it can lead to ill-posed inverse problems. These inverse problems require proper regularization to achieve meaningful results and involve the solution of computationally intensive large-scale optimization problems. Recently, however, sparse regularization has been applied successfully to inverse problems arising in different scientific areas. The increased computational power of modern computers and the development of new efficient optimization algorithms make it possible ...
Antonello, Niccolò — KU Leuven
The solution to many image restoration and reconstruction problems is often defined as the minimizer of a penalized criterion that accounts simultaneously for the data and the prior. This thesis deals more specifically with the minimization of edge-preserving penalized criteria. We focus on algorithms for large-scale problems. The minimization of penalized criteria can be addressed using a half-quadratic (HQ) approach. Converging HQ algorithms have been proposed; however, their numerical cost is generally too high for large-scale problems. An alternative is to implement inexact HQ algorithms. Nonlinear conjugate gradient algorithms can also be considered, using scalar HQ algorithms for the line search (NLCG+HQ1D). Some issues on the convergence of the aforementioned algorithms remained open until now. In this thesis we: prove the convergence of inexact HQ algorithms and NLCG+HQ1D; point out strong links between HQ algorithms and NLCG+HQ1D. ...
Labat, Christian — IRCCyN, Nantes, France
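Half-quadratic minimization replaces an edge-preserving penalty with a quadratic surrogate that is re-weighted at each iteration. The scalar sketch below is a generic IRLS-style illustration of this idea (not the large-scale algorithms analysed in the thesis), minimizing f(x) = (x - y)^2 + lam*sqrt(x^2 + eps^2):

```python
# Half-quadratic / IRLS sketch for a scalar edge-preserving criterion
#   f(x) = (x - y)**2 + lam * sqrt(x**2 + eps**2).
# Illustrative only; the thesis studies large-scale vector problems.
import math

y, lam, eps = 1.0, 1.0, 0.01

x = y  # initial guess
for _ in range(200):
    w = 1.0 / math.sqrt(x * x + eps * eps)   # half-quadratic weight
    # Minimizer of the quadratic surrogate (x - y)**2 + 0.5*lam*w*x**2:
    x = y / (1.0 + 0.5 * lam * w)

# At convergence the gradient of f vanishes:
grad = 2.0 * (x - y) + lam * x / math.sqrt(x * x + eps * eps)
print(x, grad)
```

Each iteration solves a quadratic problem exactly; the inexact variants studied in the thesis solve the (large-scale) quadratic subproblem only approximately, which is where the open convergence questions arose.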
Second-Order Multidimensional Independent Component Analysis: Theory and Methods
Independent component analysis (ICA) and blind source separation (BSS) deal with extracting a number of mutually independent elements from a set of observed linear mixtures. Motivated by various applications, this work considers a more general and more flexible model: the sources can be partitioned into groups exhibiting dependence within a given group but independence between two different groups. We argue that this is tantamount to considering multidimensional components, as opposed to the standard ICA case which is restricted to one-dimensional components. In this work, we focus on second-order methods to separate statistically-independent multidimensional components from their linear instantaneous mixture. The purpose of this work is to provide theoretical answers to questions which so far have been discussed mainly in the empirical domain. Namely, we provide a closed-form expression for the figure of merit, the mean square error (MSE), for multidimensional ...
Lahat, Dana — Tel Aviv University
Digital Pre-distortion of Microwave Power Amplifiers
With the advent of spectrally efficient wireless communication systems employing modulation schemes with varying amplitude of the communication signal, linearisation techniques for nonlinear microwave power amplifiers have gained significant interest. The availability of fast and cheap digital processing technology makes digital pre-distortion an attractive candidate as a means for power amplifier linearisation, since it promises high power efficiency and flexibility. Digital pre-distortion is further in line with the current efforts towards software defined radio systems, where a principal aim is to substitute costly and inflexible analogue circuitry with cheap and reprogrammable digital circuitry. Microwave power amplifiers are most efficient in terms of delivered microwave output power vs. supplied power if driven near the saturation point. In this operational mode, the amplifier behaves as a nonlinear device, which introduces undesired distortions in the information bearing microwave signal. These nonlinear distortions ...
Aschbacher, E. — Vienna University of Technology
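Digital pre-distortion applies an approximate inverse of the amplifier's nonlinearity before the amplifier itself. The sketch below uses an invented cubic memoryless PA model and its first-order polynomial inverse, purely to illustrate the principle:

```python
# Digital pre-distortion sketch with an invented memoryless PA model.
# PA: y = x - 0.2*x**3 (gain compression). Predistorter: first-order
# polynomial inverse p(x) = x + 0.2*x**3, so pa(p(x)) = x + O(x**5).

def pa(x):
    return x - 0.2 * x**3

def predistort(x):
    return x + 0.2 * x**3

x = 0.5
plain = pa(x)                    # 0.475 -> 5% compression
linearized = pa(predistort(x))   # ~0.496 -> cascade is nearly linear
print(plain, linearized)
```

Real pre-distorters are identified adaptively from measured input/output data and must also handle memory effects, which is the subject of the thesis.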
Signal processing of FMCW Synthetic Aperture Radar data
In the field of airborne earth observation, special attention is paid to compact, cost-effective, high-resolution imaging sensors. Such sensors are foreseen to play an important role in small-scale remote sensing applications, such as the monitoring of dikes, watercourses, or highways. Furthermore, such sensors are of military interest; reconnaissance tasks could be performed with small unmanned aerial vehicles (UAVs), reducing in this way the risk for one's own troops. In order to be operated from small, even unmanned, aircraft, such systems must consume little power and be small enough to fulfill the usually strict payload requirements. Moreover, to be of interest for the civil market, cost effectiveness is mandatory. Frequency Modulated Continuous Wave (FMCW) radar systems are generally compact and relatively cheap to purchase and to exploit. They consume little power and, due to the fact that they are ...
Meta, Adriano — Delft University of Technology
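In an FMCW radar, the target range follows from the beat frequency of the dechirped signal: R = c·f_b·T/(2B), where B is the swept bandwidth and T the sweep duration. A quick numeric sketch (parameter values invented for illustration):

```python
# FMCW range estimation sketch (illustrative parameter values).
# Dechirping a delayed echo yields a beat frequency proportional to
# the round-trip delay: f_beat = (B / T) * (2 * R / c).

c = 3.0e8        # speed of light, m/s
B = 300e6        # swept bandwidth, Hz
T = 1e-3         # sweep duration, s
f_beat = 20e3    # measured beat frequency, Hz

slope = B / T                 # chirp rate, Hz/s
delay = f_beat / slope        # round-trip delay, s
range_m = c * delay / 2.0     # target range, m

print(range_m)  # 10.0 m
```

In practice the beat spectrum is obtained with an FFT of the dechirped signal, and the continuous motion of the platform during the sweep, central to FMCW SAR processing, must be accounted for.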
Contactless and less-constrained palmprint recognition
Biometric systems combine devices, algorithms, and procedures to recognize individuals based on their physical or behavioral characteristics. These characteristics are called biometric traits. Nowadays, biometric technologies are becoming more and more widespread, and many people use biometric systems daily. However, in some cases the procedures used for the collection of the biometric traits require the cooperation of the user, controlled environments, illumination perceived as unpleasant, too strong, or harmful, or contact between the body and a sensor. For these reasons, techniques for contactless and less-constrained biometric recognition are being researched, in order to increase the usability and social acceptance of biometric systems and to broaden the fields of application of biometric technologies. In this context, the palmprint is a biometric trait whose acquisition is generally well accepted by the users. ...
Genovese, Angelo — Università degli Studi di Milano
In this thesis, the power of Machine Learning (ML) algorithms is combined with brain connectivity patterns, using Magnetic Resonance Imaging (MRI), for classification and prediction of Multiple Sclerosis (MS). White Matter (WM) as well as Grey Matter (GM) graphs are studied as connectome data types. The thesis addresses three main research objectives. The first objective aims to generate realistic brain connectome data for improving the classification of MS clinical profiles in cases of data scarcity and class imbalance. To solve the problem of limited and imbalanced data, a Generative Adversarial Network (GAN) was developed for the generation of realistic and biologically meaningful connectomes. This network achieved a 10% better MS classification performance compared to classical approaches. As a second research objective, we aim to improve classification of MS clinical profiles using only morphological features extracted from GM brain tissue. ...
Barile, Berardino — KU Leuven
In this thesis, various aspects of deconvolution of ultrasonic pulse-echo signals in non-destructive testing are treated. The deconvolution problem is formulated as estimation of a reflection sequence, which is the impulse characteristic of the inspected object, and the estimation is performed using either maximum a posteriori (MAP) or linear minimum mean square error (MMSE) estimators. A multivariable model is proposed for a certain multiple-transducer setup allowing for frequency diversity, thereby improving the estimation accuracy. Using the MAP estimator, three different material types were treated, with varying amounts of sparsity in the reflection sequences. The Gaussian distribution is used for modelling materials containing a large number of small scatterers. The Bernoulli-Gaussian distribution is used for sparse data obtained from layered structures, and a genetic algorithm approach is proposed for optimizing the corresponding MAP criterion. Sequences with intermediate sparsity suitable for ...
Olofsson, Tomas — Uppsala University
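With a Gaussian prior on the reflection sequence, the MAP estimate reduces to a regularized least-squares solution, x_hat = (H^T H + lam*I)^{-1} H^T y. A toy 2x2 sketch (matrix and data invented; the thesis works with full convolution models of the transducer):

```python
# MAP deconvolution sketch with a Gaussian prior (toy 2x2 example).
# x_hat = (H^T H + lam*I)^{-1} H^T y; lam balances data fit and prior.

def solve2(a, b, c, d, e, f):
    """Solve [[a, b], [c, d]] @ x = [e, f] by Cramer's rule."""
    det = a * d - b * c
    return [(e * d - b * f) / det, (a * f - e * c) / det]

H = [[1.0, 0.5], [0.0, 1.0]]     # toy blur/system matrix
x_true = [1.0, 2.0]
y = [H[0][0] * x_true[0] + H[0][1] * x_true[1],
     H[1][0] * x_true[0] + H[1][1] * x_true[1]]   # noise-free data

lam = 1e-3
# Normal equations: (H^T H + lam*I) x = H^T y
HtH = [[H[0][0]**2 + H[1][0]**2, H[0][0]*H[0][1] + H[1][0]*H[1][1]],
       [H[0][0]*H[0][1] + H[1][0]*H[1][1], H[0][1]**2 + H[1][1]**2]]
Hty = [H[0][0]*y[0] + H[1][0]*y[1], H[0][1]*y[0] + H[1][1]*y[1]]
x_hat = solve2(HtH[0][0] + lam, HtH[0][1],
               HtH[1][0], HtH[1][1] + lam, Hty[0], Hty[1])

print(x_hat)  # close to [1.0, 2.0] for small lam
```

The sparse Bernoulli-Gaussian prior used for layered structures leads to a non-quadratic MAP criterion, which is why the thesis resorts to a genetic algorithm for its optimization.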
Efficient representation, generation and compression of digital holograms
Digital holography is a discipline of science that measures or reconstructs the wavefield of light by means of interference. The wavefield encodes three-dimensional information, which has many applications, such as interferometry, microscopy, non-destructive testing and data storage. Moreover, digital holography is emerging as a display technology. Holograms can recreate the wavefield of a 3D object, thereby reproducing all depth cues for all viewpoints, unlike current stereoscopic 3D displays. At high quality, the appearance of an object on a holographic display system becomes indistinguishable from a real one. High-quality holograms need large volumes of data to be represented, approaching resolutions of billions of pixels. For holographic videos, the data rates needed for transmitting and encoding the raw holograms quickly become unfeasible with currently available hardware. Efficient generation and coding of holograms will be of utmost importance for future holographic displays. ...
Blinder, David — Vrije Universiteit Brussel