Statistical signal processing of spectrometric data: study of the pileup correction for energy spectra applied to Gamma spectrometry

The main objective of $\gamma$ spectrometry is to characterize the radioactive elements of an unknown source by studying the energy of the emitted $\gamma$ photons. When a photon interacts with a detector, its energy is converted into an electrical pulse, whose integral is measured. The histogram obtained by collecting these energies can be used to identify radionuclides and measure their activity. However, at high counting rates, perturbations due to the stochastic nature of the temporal signal can cripple the identification of the radioactive elements. More specifically, since the detector has a finite resolution, close arrival times of photons, which can be modeled as a homogeneous Poisson process, cause pileups of individual pulses. This phenomenon distorts energy spectra by introducing multiple fake spikes and artificially prolonging the Compton continuum, which can mask low-intensity spikes. The ...
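The pileup mechanism described in this abstract can be illustrated with a short simulation, assuming purely illustrative values for the counting rate and the detector resolution (both hypothetical, not taken from the thesis):

```python
import random

def simulate_pileup(rate, resolution, n_photons, seed=0):
    """Simulate photon arrival times as a homogeneous Poisson process
    (exponential inter-arrival times with parameter `rate`) and count
    pileup events, i.e. photons arriving within `resolution` seconds
    of the previous one."""
    rng = random.Random(seed)
    t = 0.0
    previous = None
    pileups = 0
    for _ in range(n_photons):
        t += rng.expovariate(rate)  # inter-arrival time ~ Exp(rate)
        if previous is not None and t - previous < resolution:
            pileups += 1
        previous = t
    return pileups

# At high counting rates pileups dominate: with rate * resolution = 0.5,
# a fraction of roughly 1 - exp(-0.5) ~ 39% of pulses overlaps its
# predecessor, distorting the recorded energy histogram.
```

The fraction of piled-up pulses follows directly from the exponential gap distribution, which is why the distortion worsens so quickly as the counting rate grows.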

Trigano, Thomas — Télécom Paris Tech


Direction of Arrival Estimation and Localization Exploiting Sparse and One-Bit Sampling

Data acquisition is a necessary first step in digital signal processing applications such as radar, wireless communications and array processing. Traditionally, this process is performed by uniformly sampling signals at a frequency above the Nyquist rate and converting the resulting samples into digital numeric values through high-resolution amplitude quantization. While the traditional approach to data acquisition is straightforward and extremely well-proven, it may be either impractical or impossible in many modern applications due to the existing fundamental trade-off between sampling rate, amplitude quantization precision, implementation costs, and usage of physical resources, e.g. bandwidth and power consumption. Motivated by this fact, system designers have recently proposed exploiting sparse and few-bit quantized sampling instead of the traditional way of data acquisition in order to reduce implementation costs and usage of physical resources in such applications. However, before transitioning from the traditional data ...
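As a toy illustration of the sparse, few-bit acquisition idea mentioned in the abstract, one can decimate a sampled signal and keep only the sign of each retained sample (one-bit quantization); the function name and decimation scheme here are hypothetical simplifications:

```python
def sparse_one_bit_acquire(samples, keep_every):
    """Sketch of sparse + one-bit acquisition: retain every
    `keep_every`-th sample (a crude form of sparse, sub-Nyquist
    sampling) and keep only its sign (one-bit amplitude
    quantization)."""
    return [1 if s >= 0 else -1 for s in samples[::keep_every]]

# Each retained measurement now costs a single bit, at the price of
# discarding all amplitude information and most of the samples.
```

Recovering parameters such as directions of arrival from such heavily compressed measurements is precisely what makes this acquisition model challenging.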

Saeid Sedighi — University of Luxembourg


Local Prior Knowledge in Tomography

Computed tomography (CT) is a technique that uses computation to form an image of the inside of an object or person, by combining projections of that object or person. The word tomography is derived from the Greek word tomos, meaning slice. The basis for computed tomography was laid in 1917 by Johann Radon, an Austrian mathematician. Computed tomography has a broad range of applications, the best known being medical imaging (the CT scanner), where X-rays are used for making the projection images. The first practical application of CT was, however, in astronomy, by Ronald Bracewell in 1956. He used CT to improve the resolution of radio-astronomical observations. The practical applications in this thesis are from electron tomography, where the images are made with an electron microscope, and from preclinical research, where the images are made with a CT scanner. There ...

Roelandts, Tom — University of Antwerp


Super-Resolution Image Reconstruction Using Non-Linear Filtering Techniques

Super-resolution (SR) is a filtering technique that combines a sequence of under-sampled and degraded low-resolution images to produce an image at a higher resolution. The reconstruction takes advantage of the additional spatio-temporal data available in the sequence of images portraying the same scene. The fundamental problem addressed in super-resolution is a typical example of an inverse problem, wherein multiple low-resolution (LR) images are used to solve for the original high-resolution (HR) image. Super-resolution has already proved useful in many practical cases where multiple frames of the same scene can be obtained, including medical applications, satellite imaging and astronomical observatories. The application of super-resolution filtering in consumer cameras and mobile devices should become possible in the future, especially as the computational and memory resources in these devices keep increasing. For that goal, several research problems need to be ...

Trimeche, Mejdi — Tampere University of Technology


Adaptive Nonlocal Signal Restoration and Enhancement Techniques for High-Dimensional Data

The large number of practical applications involving digital images has motivated significant interest in restoration solutions that improve the visual quality of the data in the presence of various acquisition and compression artifacts. Digital images are the result of an acquisition process based on the measurement of a physical quantity of interest incident upon an imaging sensor over a specified period of time. The quantity of interest depends on the targeted imaging application. Common imaging sensors measure the number of photons impinging over a dense grid of photodetectors in order to produce an image similar to what is perceived by the human visual system. Other applications focus on the part of the electromagnetic spectrum not visible to the human visual system, and thus require different sensing technologies to form the image. In all cases, even with the advance of ...

Maggioni, Matteo — Tampere University of Technology


Inverse Scattering Procedures for the Reconstruction of One-Dimensional Permittivity Range Profiles

Inverse scattering is relevant to a very large class of problems, where the unknown structure of a scattering object is estimated by measuring the scattered field produced by known probing waves. Therefore, for more than three decades, the promise of non-invasive imaging inspection by electromagnetic probing radiation has justified research interest in these techniques. Several application areas are involved, such as civil and industrial engineering, non-destructive testing and medical imaging, as well as subsurface inspection for oil exploration or unexploded devices. In spite of this relevance, most scattering tomography techniques are not reliable enough to solve practical problems. Indeed, the nonlinear relationship between the scattered field and the object function and the robustness of the inversion algorithms are still open issues. In particular, microwave tomography presents a number of specific difficulties that make it much more involved to ...

Genovesi, Simone — University of Pisa


Bayesian resolution of the non linear inverse problem of Electrical Impedance Tomography with Finite Element modeling

Resistivity distribution estimation, widely known as Electrical Impedance Tomography (EIT), is a nonlinear, ill-posed inverse problem. The partial differential equation governing this experiment yields no analytical solution for an arbitrary conductivity distribution, so solving the forward problem requires an approximation. The Finite Element Method (FEM) provides a computationally cheap forward model which preserves the nonlinear image-data relation and also proves sufficiently accurate for the inversion. Within the Bayesian approach, Markovian priors on the log-conductivity distribution are introduced for regularization. The neighborhood system is directly derived from the FEM triangular mesh structure. We first propose a maximum a posteriori (MAP) estimation with a Huber-Markov prior, which favours smooth distributions while preserving locally discontinuous features. The resulting criterion is minimized with the pseudo-conjugate gradient method. Simulation results reveal significant improvements in terms of robustness to noise, computational speed ...
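The Huber-Markov prior mentioned in the abstract penalizes differences between neighboring log-conductivities with the Huber function: quadratic for small differences (smoothing) and linear for large ones (edge-preserving). A minimal sketch, where the flat list of neighbor pairs is a hypothetical stand-in for the FEM triangular-mesh neighborhood system and the threshold value is illustrative:

```python
def huber(t, delta):
    """Huber penalty: quadratic for |t| <= delta (smooths small
    fluctuations), linear beyond (preserves sharp discontinuities).
    Continuous and differentiable at |t| = delta."""
    a = abs(t)
    if a <= delta:
        return 0.5 * t * t
    return delta * (a - 0.5 * delta)

def huber_markov_energy(values, neighbors, delta):
    """Prior energy: sum of Huber penalties over a neighborhood
    system, given here as (i, j) index pairs of adjacent elements."""
    return sum(huber(values[i] - values[j], delta) for i, j in neighbors)
```

Because the penalty grows only linearly past the threshold, a single large jump between two adjacent elements costs far less than under a quadratic prior, which is what lets the MAP estimate keep sharp conductivity boundaries.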

Martin, Thierry — Laboratoire des signaux et systèmes


Three dimensional shape modeling: segmentation, reconstruction and registration

Accounting for uncertainty in three-dimensional (3D) shapes is important in a large number of scientific and engineering areas, such as biometrics, biomedical imaging, and data mining. It is well known that 3D polar shaped objects can be represented by Fourier descriptors such as spherical harmonics and double Fourier series. However, the statistics of these spectral shape models have not been widely explored. This thesis studies several areas involved in 3D shape modeling, including random field models for statistical shape modeling, optimal shape filtering, parametric active contours for object segmentation and surface reconstruction. It also investigates multi-modal image registration with respect to tumor activity quantification. Spherical harmonic expansions over the unit sphere not only provide a low dimensional polarimetric parameterization of stochastic shape, but also correspond to the Karhunen-Loève (K-L) expansion of any isotropic random field on the unit sphere. Spherical ...

Li, Jia — University of Michigan


Radial Basis Function Network Robust Learning Algorithms in Computer Vision Applications

This thesis introduces new learning algorithms for Radial Basis Function (RBF) networks. An RBF network is a feed-forward two-layer neural network used for function approximation or pattern classification applications. The proposed training algorithms are based on robust statistics. Their theoretical performance has been assessed and compared with that of classical algorithms for training RBF networks. The applications of RBF networks described in this thesis include simultaneous moving-object segmentation and optical flow estimation in image sequences, as well as 3-D image modeling and segmentation. A Bayesian classifier model is used for the representation of the image sequence and 3-D images. This employs an energy based description of the probability functions involved. The energy functions are represented by RBF networks whose inputs are various features drawn from the images and whose outputs are objects. The hidden units embed kernel functions. Each kernel ...

Bors, Adrian G. — Aristotle University of Thessaloniki


Exact Unbiased Inverse of the Anscombe Transformation and its Poisson-Gaussian Generalization

Digital image acquisition is an intricate process, which is subject to various errors. Some of these errors are signal-dependent, whereas others are signal-independent. In particular, photon emission and sensing are inherently random physical processes, which in turn substantially contribute to the randomness in the output of the imaging sensor. This signal-dependent noise can be approximated through a Poisson distribution. On the other hand, there are various signal-independent noise sources involved in the image capturing chain, arising from the physical properties and imperfections of the imaging hardware. The noise attributed to these sources is typically modelled collectively as additive white Gaussian noise. Hence, we have three common ways of modelling the noise present in a digital image: Gaussian, Poisson, or Poisson-Gaussian. Image denoising aims at removing or attenuating this noise from the captured image, in order to provide an estimate of ...
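The classical Anscombe transformation referred to in this title stabilizes the variance of Poisson data to approximately one, after which Gaussian denoisers can be applied; the subtlety the thesis addresses is the inverse step. A sketch of the standard forward transform and its two well-known algebraic inverses (the exact unbiased inverse, the thesis's contribution, has no closed form and is not reproduced here):

```python
import math

def anscombe(x):
    """Forward Anscombe transform 2*sqrt(x + 3/8): approximately
    variance-stabilizes Poisson-distributed data to unit variance."""
    return 2.0 * math.sqrt(x + 3.0 / 8.0)

def inverse_anscombe_algebraic(y):
    """Direct algebraic inverse (y/2)^2 - 3/8; when applied to a
    denoised transform value it is biased at low counts."""
    return (y / 2.0) ** 2 - 3.0 / 8.0

def inverse_anscombe_asymptotic(y):
    """Asymptotically unbiased inverse (y/2)^2 - 1/8: removes the
    leading-order bias for large means, but still errs for the low
    counts that motivate the exact unbiased inverse."""
    return (y / 2.0) ** 2 - 1.0 / 8.0
```

Note the deterministic round trip through the asymptotic inverse returns x + 1/4, not x; its unbiasedness is a statement about expectations over Poisson noise, which is precisely why the choice of inverse matters.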

Mäkitalo, Markku — Tampere University of Technology


Array Signal Processing Algorithms for Beamforming and Direction Finding

Array processing is an area of study devoted to processing the signals received from an antenna array and extracting information of interest. It has played an important role in widespread applications like radar, sonar, and wireless communications. Numerous adaptive array processing algorithms have been reported in the literature in the last several decades. These algorithms, in a general view, exhibit a trade-off between performance and required computational complexity. In this thesis, we focus on the development of array processing algorithms in the application of beamforming and direction of arrival (DOA) estimation. In the beamformer design, we employ the constrained minimum variance (CMV) and the constrained constant modulus (CCM) criteria to propose full-rank and reduced-rank adaptive algorithms. Specifically, for the full-rank algorithms, we present two low-complexity adaptive step size mechanisms with the CCM criterion for the step size adaptation of the ...

Lei Wang — University of York


Robust Speech Recognition on Intelligent Mobile Devices with Dual-Microphone

Despite the outstanding progress made on automatic speech recognition (ASR) throughout the last decades, noise-robust ASR still poses a challenge. Tackling acoustic noise in ASR systems is more important than ever before for a twofold reason: 1) ASR technology has begun to be extensively integrated in intelligent mobile devices (IMDs) such as smartphones to easily accomplish different tasks (e.g. search-by-voice), and 2) IMDs can be used anywhere at any time, that is, under many different acoustic (noisy) conditions. On the other hand, with the aim of enhancing noisy speech, IMDs have begun to embed small microphone arrays, i.e. microphone arrays comprised of a few sensors close to each other. These multi-sensor IMDs often embed one microphone (usually at their rear) intended to capture the acoustic environment rather than the speaker’s voice. This is the so-called secondary microphone. While classical microphone ...

López-Espejo, Iván — University of Granada


Quantization Strategies for Low-Power Communications

Power reduction in digital communication systems can be achieved in many ways. Reduction of the wordlengths used to represent data and control variables in the digital circuits comprising a communication system is an effective strategy, as register power consumption increases with wordlength. Another strategy is the reduction of the required data transmission rate, and hence speed of the digital circuits, by efficient source encoding. In this dissertation, applications of both of these power reduction strategies are investigated. The LMS adaptive filter, for which a myriad of applications exists in digital communication systems, is optimized for performance with a power consumption constraint. This optimization is achieved by an analysis of the effects of wordlength reduction on both performance (transient and steady-state) and power consumption. Analytical formulas for the residual steady-state mean square error (MSE) due ...
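The LMS adaptive filter analyzed in the abstract updates its weights along the instantaneous gradient of the squared error; wordlength reduction then quantizes these weights and signals. A minimal full-precision sketch (the 2-tap system being identified and the step size are hypothetical, chosen only for illustration):

```python
import random

def lms_identify(x, d, n_taps, mu):
    """Least-mean-squares adaptation: w <- w + mu * e * x_vec, where
    e = d[n] - w . x_vec is the instantaneous output error."""
    w = [0.0] * n_taps
    for n in range(n_taps - 1, len(x)):
        x_vec = [x[n - k] for k in range(n_taps)]  # most recent sample first
        y = sum(wk * xk for wk, xk in zip(w, x_vec))
        e = d[n] - y
        w = [wk + mu * e * xk for wk, xk in zip(w, x_vec)]
    return w

# Identify an unknown FIR system h = [0.5, -0.25] from noiseless
# input/output data; w converges toward h.
rng = random.Random(1)
x = [rng.uniform(-1, 1) for _ in range(2000)]
h = [0.5, -0.25]
d = [h[0] * x[n] + (h[1] * x[n - 1] if n > 0 else 0.0) for n in range(len(x))]
w = lms_identify(x, d, 2, 0.1)
```

Quantizing w, x and e to short wordlengths perturbs both the transient and the steady-state MSE of this recursion, which is the trade-off against register power that the dissertation quantifies.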

Gupta, Riten — University of Michigan


Bayesian Compressed Sensing using Alpha-Stable Distributions

During the last decades, information has been gathered and processed at an explosive rate. This fact gives rise to a very important issue, that is, how to effectively and precisely describe the information content of a given source signal or an ensemble of source signals, such that it can be stored, processed or transmitted while taking into consideration the limitations and capabilities of the various digital devices. One of the fundamental principles of signal processing for decades has been the Nyquist-Shannon sampling theorem, which states that the minimum number of samples needed to reconstruct a signal without error is dictated by its bandwidth. However, there are many cases in our everyday life in which sampling at the Nyquist rate results in too much data, demanding increased processing power as well as storage. A mathematical theory that emerged ...
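The sampling theorem stated in the abstract comes with a constructive reconstruction formula: a bandlimited signal is recovered from its samples by sinc interpolation. A sketch under illustrative assumptions (a finite sample window, so the reconstruction is only approximate between samples near the edges):

```python
import math

def sinc_reconstruct(samples, fs, t):
    """Shannon reconstruction x(t) = sum_n x[n] * sinc(fs*t - n),
    exact for signals bandlimited below fs/2 given infinitely many
    samples; truncated here to the available finite window."""
    def sinc(u):
        return 1.0 if u == 0 else math.sin(math.pi * u) / (math.pi * u)
    return sum(s * sinc(fs * t - n) for n, s in enumerate(samples))
```

At any sampling instant t = n/fs the formula returns the stored sample exactly; between samples it interpolates, and the number of terms needed for a given accuracy is one concrete way to see why Nyquist-rate acquisition can produce "too much data" for storage and processing.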

Tzagkarakis, George — University of Crete


Robust Signal Processing with Applications to Positioning and Imaging

This dissertation investigates robust signal processing and machine learning techniques, with the objective of improving the robustness of two applications against various threats, namely Global Navigation Satellite System (GNSS) based positioning and satellite imaging. GNSS technology is widely used in different fields, such as autonomous navigation, asset tracking, or smartphone positioning, while satellite imaging plays a central role in monitoring, detecting and estimating the intensity of key natural phenomena, such as floods and earthquakes. Considering the use of both GNSS positioning and satellite imaging in critical and safety-of-life applications, it is necessary to protect those two technologies from either intentional or unintentional threats. In the real world, the common threats to GNSS technology include multipath propagation and intentional/unintentional interference. This thesis investigates methods to mitigate the influence of such sources of error, with the final objective of ...

Li, Haoqing — Northeastern University
