Tradeoffs and limitations in statistically based image reconstruction problems (2002)
The main objective of $\gamma$ spectrometry is to characterize the radioactive elements of an unknown source by studying the energy of the emitted $\gamma$ photons. When a photon interacts with a detector, its energy is converted into an electrical pulse, whose integral is measured. The histogram obtained by collecting these energies can be used to identify radionuclides and measure their activity. However, at high counting rates, perturbations due to the stochastic nature of the temporal signal can cripple the identification of the radioactive elements. More specifically, since the detector has a finite resolution, close arrival times of photons, which can be modeled as a homogeneous Poisson process, cause pileups of individual pulses. This phenomenon distorts energy spectra by introducing multiple fake spikes and artificially prolonging the Compton continuum, which can mask spikes of low intensity. The ...
Trigano, Thomas — Télécom Paris Tech
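As a rough illustration of the pileup mechanism described in the abstract above, here is a minimal simulation sketch: photons arrive as a homogeneous Poisson process, and any photons landing within the detector's resolution window of the previous one are summed into a single measured pulse. All rates, energies and window lengths are illustrative assumptions, not values from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
rate = 1e5            # photon arrival rate (1/s), assumed
resolution = 2e-6     # detector resolution window (s), assumed
n = 100_000

# homogeneous Poisson process: i.i.d. exponential inter-arrival times
arrivals = np.cumsum(rng.exponential(1.0 / rate, size=n))
energies = rng.choice([0.662, 1.173], size=n)  # two hypothetical photopeaks (MeV)

measured = []
i = 0
while i < n:
    e = energies[i]
    j = i + 1
    # pile up every photon that lands within the resolution window of its predecessor
    while j < n and arrivals[j] - arrivals[j - 1] < resolution:
        e += energies[j]
        j += 1
    measured.append(e)
    i = j

hist, edges = np.histogram(measured, bins=200, range=(0.0, 3.0))
# fake spikes appear near sums of the true lines (e.g. 1.324, 1.835, 2.346 MeV)
```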
Local Prior Knowledge in Tomography
Computed tomography (CT) is a technique that uses computation to form an image of the inside of an object or person, by combining projections of that object or person. The word tomography is derived from the Greek word tomos, meaning slice. The basis for computed tomography was laid in 1917 by Johann Radon, an Austrian mathematician. Computed tomography has a broad range of applications, the best known being medical imaging (the CT scanner), where X-rays are used for making the projection images. The first practical application of CT was, however, in astronomy, by Ronald Bracewell in 1956. He used CT to improve the resolution of radio-astronomical observations. The practical applications in this thesis are from electron tomography, where the images are made with an electron microscope, and from preclinical research, where the images are made with a CT scanner. There ...
Roelandts, Tom — University of Antwerp
Direction of Arrival Estimation and Localization Exploiting Sparse and One-Bit Sampling
Data acquisition is a necessary first step in digital signal processing applications such as radar, wireless communications and array processing. Traditionally, this process is performed by uniformly sampling signals at a frequency above the Nyquist rate and converting the resulting samples into digital numeric values through high-resolution amplitude quantization. While the traditional approach to data acquisition is straightforward and extremely well-proven, it may be either impractical or impossible in many modern applications due to the fundamental trade-off between sampling rate, amplitude quantization precision, implementation costs, and usage of physical resources, e.g., bandwidth and power consumption. Motivated by this fact, system designers have recently proposed exploiting sparse and few-bit quantized sampling instead of the traditional way of data acquisition in order to reduce implementation costs and usage of physical resources in such applications. However, before transitioning from the traditional data ...
Saeid Sedighi — University of Luxembourg
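A minimal sketch of the one-bit measurement model discussed above: only the sign of each noisy linear measurement is retained, so all amplitude information is discarded at the sensor. Dimensions, sparsity level and noise power are illustrative assumptions; this is the generic model, not the estimators developed in the dissertation.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 64, 256                      # signal length, number of measurements (assumed)

# sparse signal: 4 nonzero entries at random positions
x = np.zeros(n)
x[rng.choice(n, 4, replace=False)] = rng.standard_normal(4)

A = rng.standard_normal((m, n))     # random linear sampling operator
y = np.sign(A @ x + 0.05 * rng.standard_normal(m))   # 1-bit quantizer

# amplitude is lost, so x can only be recovered up to scale; a common
# convention is to estimate x constrained to the unit sphere.
```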
Super-Resolution Image Reconstruction Using Non-Linear Filtering Techniques
Super-resolution (SR) is a filtering technique that combines a sequence of under-sampled and degraded low-resolution images to produce an image at a higher resolution. The reconstruction takes advantage of the additional spatio-temporal data available in the sequence of images portraying the same scene. The fundamental problem addressed in super-resolution is a typical example of an inverse problem, wherein multiple low-resolution (LR) images are used to solve for the original high-resolution (HR) image. Super-resolution has already proved useful in many practical cases where multiple frames of the same scene can be obtained, including medical applications, satellite imaging and astronomical observatories. The application of super-resolution filtering in consumer cameras and mobile devices will become possible in the future, especially as the computational and memory resources in these devices keep increasing. For that goal, several research problems need to be ...
Trimeche, Mejdi — Tampere University of Technology
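To make the inverse problem above concrete, here is a sketch of the standard multi-frame observation model: each low-resolution frame is a shifted, blurred, downsampled and noisy copy of the unknown high-resolution image. The integer-shift motion model, the crude averaging blur and the noise level are all toy assumptions for illustration.

```python
import numpy as np

def lr_frame(hr, shift, factor=2, rng=None):
    """Generate one LR observation from the HR image (toy forward model)."""
    shifted = np.roll(hr, shift, axis=(0, 1))        # integer warp (toy motion)
    blurred = (shifted
               + np.roll(shifted, 1, 0) + np.roll(shifted, -1, 0)
               + np.roll(shifted, 1, 1) + np.roll(shifted, -1, 1)) / 5.0  # crude blur
    lr = blurred[::factor, ::factor]                 # decimation
    if rng is not None:
        lr = lr + 0.01 * rng.standard_normal(lr.shape)  # sensor noise
    return lr

rng = np.random.default_rng(2)
hr = rng.random((64, 64))
frames = [lr_frame(hr, (dy, dx), rng=rng) for dy in (0, 1) for dx in (0, 1)]
# SR reconstruction inverts this chain jointly over all frames.
```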
Inverse Scattering Procedures for the Reconstruction of One-Dimensional Permittivity Range Profiles
Inverse scattering is relevant to a very large class of problems, where the unknown structure of a scattering object is estimated by measuring the scattered field produced by known probing waves. For more than three decades, the promise of non-invasive imaging inspection by electromagnetic probing radiation has therefore justified research interest in these techniques. Several application areas are involved, such as civil and industrial engineering, non-destructive testing and medical imaging, as well as subsurface inspection for oil exploration or unexploded devices. In spite of this relevance, most scattering tomography techniques are not reliable enough to solve practical problems. Indeed, the nonlinear relationship between the scattered field and the object function and the robustness of the inversion algorithms are still open issues. In particular, microwave tomography presents a number of specific difficulties that make it much more involved to ...
Genovesi, Simone — University of Pisa
Adaptive Nonlocal Signal Restoration and Enhancement Techniques for High-Dimensional Data
The large number of practical applications involving digital images has motivated significant interest in restoration solutions that improve the visual quality of the data in the presence of various acquisition and compression artifacts. Digital images are the result of an acquisition process based on the measurement of a physical quantity of interest incident upon an imaging sensor over a specified period of time. The quantity of interest depends on the targeted imaging application. Common imaging sensors measure the number of photons impinging over a dense grid of photodetectors in order to produce an image similar to what is perceived by the human visual system. Other applications focus on the part of the electromagnetic spectrum not visible to the human visual system, and thus require different sensing technologies to form the image. In all cases, even with the advance of ...
Maggioni, Matteo — Tampere University of Technology
Quantization Strategies for Low-Power Communications
Power reduction in digital communication systems can be achieved in many ways. Reduction of the wordlengths used to represent data and control variables in the digital circuits comprising a communication system is an effective strategy, as register power consumption increases with wordlength. Another strategy is the reduction of the required data transmission rate, and hence the speed of the digital circuits, by efficient source encoding. In this dissertation, applications of both of these power reduction strategies are investigated. The LMS adaptive filter, for which a myriad of applications exists in digital communication systems, is optimized for performance under a power consumption constraint. This optimization is achieved by an analysis of the effects of wordlength reduction on both performance (transient and steady-state) and power consumption. Analytical formulas for the residual steady-state mean square error (MSE) due ...
Gupta, Riten — University of Michigan
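A minimal sketch of the trade-off discussed above: a textbook LMS adaptive filter whose coefficients are rounded to a fixed wordlength after every update. Fewer bits lower register power but raise the residual steady-state MSE. The unknown system, step size and bit width are illustrative assumptions, and the uniform rounding quantizer is a simplification of real fixed-point arithmetic.

```python
import numpy as np

def quantize(w, bits):
    """Round coefficients to a uniform grid with the given wordlength (toy model)."""
    step = 2.0 ** -(bits - 1)
    return np.round(w / step) * step

rng = np.random.default_rng(3)
h = np.array([0.5, -0.3, 0.1])        # unknown system to identify (illustrative)
mu, bits, n_taps = 0.01, 8, 3
w = np.zeros(n_taps)
x = rng.standard_normal(5000)

for k in range(n_taps, len(x)):
    u = x[k - n_taps:k][::-1]                     # regressor (most recent first)
    d = h @ u + 0.01 * rng.standard_normal()      # desired signal + noise
    e = d - w @ u                                 # a priori error
    w = quantize(w + mu * e * u, bits)            # LMS update, then wordlength rounding

# sweeping `bits` traces the performance/power trade-off described above
```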
Resistivity distribution estimation, widely known as Electrical Impedance Tomography (EIT), is a nonlinear ill-posed inverse problem. The partial differential equation governing this experiment yields no analytical solution for an arbitrary conductivity distribution, so solving the forward problem requires an approximation. The Finite Element Method (FEM) provides a computationally cheap forward model which preserves the nonlinear image-data relation and also proves sufficiently accurate for the inversion. Within the Bayesian approach, Markovian priors on the log-conductivity distribution are introduced for regularization. The neighborhood system is directly derived from the FEM triangular mesh structure. We first propose a maximum a posteriori (MAP) estimation with a Huber-Markov prior, which favours smooth distributions while preserving locally discontinuous features. The resulting criterion is minimized with the pseudo-conjugate gradient method. Simulation results reveal significant improvements in terms of robustness to noise, computational speed ...
Martin, Thierry — Laboratoire des signaux et systèmes
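As a sketch of the MAP structure described above: the Huber function is quadratic for small neighbor differences (smoothing) and linear for large ones (edge preservation), and the prior sums it over FEM mesh neighbors. The threshold, regularization weight, toy neighbor pairs and data-misfit stand-in below are all assumed for illustration, not taken from the thesis.

```python
import numpy as np

def huber(t, delta=0.1):
    """Huber function: 0.5*t^2 for |t| <= delta, linear beyond (delta assumed)."""
    a = np.abs(t)
    return np.where(a <= delta, 0.5 * t**2, delta * (a - 0.5 * delta))

def map_criterion(log_sigma, data_misfit, edges, beta=1.0):
    """Negative log-posterior up to constants: misfit + Huber-Markov prior."""
    # edges: list of (i, j) neighbor pairs derived from the FEM mesh (assumed given)
    prior = sum(huber(log_sigma[i] - log_sigma[j]) for i, j in edges)
    return data_misfit(log_sigma) + beta * prior

edges = [(0, 1), (1, 2)]                               # toy neighbor pairs
val = map_criterion(np.array([0.0, 0.05, 1.0]),
                    lambda s: float(s @ s), edges)     # placeholder misfit
```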
Robust Methods for Sensing and Reconstructing Sparse Signals
Compressed sensing (CS) is a recently introduced signal acquisition framework that goes against the traditional Nyquist sampling paradigm. CS demonstrates that a sparse, or compressible, signal can be acquired using a low-rate acquisition process. Since noise is always present in practical data acquisition systems, sensing and reconstruction methods are typically developed assuming a Gaussian (light-tailed) model for the corrupting noise. However, when the underlying signal and/or the measurements are corrupted by impulsive noise, commonly employed linear sampling operators, coupled with Gaussian-derived reconstruction algorithms, fail to recover a close approximation of the signal. This dissertation develops robust sampling and reconstruction methods for sparse signals in the presence of impulsive noise. To achieve this objective, we make use of robust statistics theory to develop appropriate methods addressing the problem of impulsive noise in CS systems. We develop a generalized Cauchy distribution (GCD) ...
Carrillo, Rafael — University of Delaware
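A small sketch of why Cauchy-derived losses resist impulsive noise: the Lorentzian loss grows logarithmically in the residual, so a single gross outlier has bounded influence, unlike the squared error. The scale parameter gamma and the toy residual vector are illustrative assumptions.

```python
import numpy as np

def lorentzian_loss(r, gamma=1.0):
    """Lorentzian (Cauchy-derived) loss: log-growth down-weights outliers."""
    return np.sum(np.log1p((r / gamma) ** 2))

# residual vector with one gross outlier
r = np.array([0.1, -0.2, 50.0, 0.05])
print(np.sum(r**2))            # L2 loss: dominated by the outlier (~2500)
print(lorentzian_loss(r))      # Lorentzian loss: outlier contributes only ~7.8
```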
Bayesian Signal Processing Techniques for GNSS Receivers: from multipath mitigation to positioning
This dissertation deals with the design of satellite-based navigation receivers. The term Global Navigation Satellite Systems (GNSS) refers to those navigation systems based on a constellation of satellites which emit ranging signals useful for positioning. Although the American GPS is probably the most popular, the European contribution (Galileo) will be operative soon. Other global and regional systems exist, all with the same objective: to aid the user's positioning. Initially, the thesis provides the state of the art in GNSS: navigation signal structure and receiver architecture. The design of a GNSS receiver consists of a number of functional blocks. From the antenna to the final position calculation, the design poses challenges in many research areas. Although the Radio Frequency chain of the receiver is discussed in the thesis, the main focus of the dissertation is on the signal processing algorithms applied after signal digitization. These ...
Closas, Pau — Universitat Politecnica de Catalunya
A statistical approach to motion estimation
Digital video technology has been characterized by steady growth in the last decade. New applications like video e-mail, third-generation mobile phone video communications, videoconferencing, and video streaming on the web continuously push for further evolution of research in digital video coding. In order to be sent over the Internet or wireless networks, video information clearly needs compression to meet bandwidth requirements. Compression is mainly realized by exploiting the redundancy present in the data. A sequence of images contains an intrinsic, intuitive and simple idea of redundancy: two successive images are very similar. This simple concept is called temporal redundancy. The search for a proper scheme to exploit temporal redundancy completely changes the scenario between compression of still pictures and sequences of images. It also represents the key to very high performance in image sequence coding when compared ...
Moschetti, Fulvio — Swiss Federal Institute of Technology
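The classical way to exploit the temporal redundancy described above is block matching: for each block of the current frame, search the previous frame for the best-matching block and encode only the motion vector plus the residual. A minimal exhaustive-search sketch follows; block size, search range and the SAD criterion are standard but assumed choices, not necessarily those of the thesis.

```python
import numpy as np

def best_motion_vector(prev, cur, y, x, block=8, search=4):
    """Exhaustive block matching with a sum-of-absolute-differences criterion."""
    ref = cur[y:y + block, x:x + block]
    best, best_mv = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + block > prev.shape[0] or xx + block > prev.shape[1]:
                continue  # candidate block falls outside the previous frame
            sad = np.abs(prev[yy:yy + block, xx:xx + block] - ref).sum()
            if sad < best:
                best, best_mv = sad, (dy, dx)
    return best_mv  # encode the vector + residual instead of the raw block
```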
Exact Unbiased Inverse of the Anscombe Transformation and its Poisson-Gaussian Generalization
Digital image acquisition is an intricate process, which is subject to various errors. Some of these errors are signal-dependent, whereas others are signal-independent. In particular, photon emission and sensing are inherently random physical processes, which in turn substantially contribute to the randomness in the output of the imaging sensor. This signal-dependent noise can be approximated through a Poisson distribution. On the other hand, there are various signal-independent noise sources involved in the image capturing chain, arising from the physical properties and imperfections of the imaging hardware. The noise attributed to these sources is typically modelled collectively as additive white Gaussian noise. Hence, we have three common ways of modelling the noise present in a digital image: Gaussian, Poisson, or Poisson-Gaussian. Image denoising aims at removing or attenuating this noise from the captured image, in order to provide an estimate of ...
Mäkitalo, Markku — Tampere University of Technology
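A sketch of the variance-stabilization pipeline described above: the forward Anscombe transformation maps Poisson data to approximately unit-variance Gaussian data, a Gaussian denoiser is applied, and the result is mapped back. The naive algebraic inverse is biased at low counts, which is what the exact unbiased inverse corrects; the closed-form expression below is the asymptotic approximation of that inverse as I recall it from the literature, so treat the coefficients as a hedged recollection rather than a definitive statement.

```python
import numpy as np

def anscombe(x):
    """Forward Anscombe transformation: Poisson -> approx. unit-variance Gaussian."""
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inverse_anscombe_asymptotic(d):
    """Closed-form approximation of the exact unbiased inverse (moderate/high counts).

    The exact unbiased inverse is defined via an expectation over the Poisson
    distribution and is usually tabulated; this series approximates it.
    """
    return (d**2 / 4.0
            + np.sqrt(3.0 / 2.0) / (4.0 * d)
            - 11.0 / (8.0 * d**2)
            + 5.0 * np.sqrt(3.0 / 2.0) / (8.0 * d**3)
            - 1.0 / 8.0)
```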
Array Signal Processing Algorithms for Beamforming and Direction Finding
Array processing is an area of study devoted to processing the signals received from an antenna array and extracting information of interest. It has played an important role in widespread applications like radar, sonar, and wireless communications. Numerous adaptive array processing algorithms have been reported in the literature in the last several decades. These algorithms, in a general view, exhibit a trade-off between performance and required computational complexity. In this thesis, we focus on the development of array processing algorithms in the application of beamforming and direction of arrival (DOA) estimation. In the beamformer design, we employ the constrained minimum variance (CMV) and the constrained constant modulus (CCM) criteria to propose full-rank and reduced-rank adaptive algorithms. Specifically, for the full-rank algorithms, we present two low-complexity adaptive step size mechanisms with the CCM criterion for the step size adaptation of the ...
Lei Wang — University of York
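As a sketch of the constrained minimum variance idea mentioned above: minimize the beamformer output power subject to a unit-gain constraint toward the look direction, giving the classical full-rank closed form w = R⁻¹a / (aᴴR⁻¹a). Array geometry, angles and noise levels are illustrative assumptions, and this is the textbook CMV solution rather than the adaptive full-rank/reduced-rank algorithms of the thesis.

```python
import numpy as np

def steering_vector(theta, n_sensors, spacing=0.5):
    """Uniform linear array response (spacing in wavelengths, assumed half-wave)."""
    k = np.arange(n_sensors)
    return np.exp(-2j * np.pi * spacing * k * np.sin(theta))

def cmv_weights(R, a):
    """Minimum variance weights with unit gain toward steering vector a."""
    Ri_a = np.linalg.solve(R, a)
    return Ri_a / (a.conj() @ Ri_a)   # w = R^{-1} a / (a^H R^{-1} a)

rng = np.random.default_rng(4)
n, snapshots = 8, 500
a = steering_vector(0.3, n)           # look direction (radians), assumed
s = rng.standard_normal(snapshots) + 1j * rng.standard_normal(snapshots)
noise = 0.5 * (rng.standard_normal((n, snapshots))
               + 1j * rng.standard_normal((n, snapshots)))
X = a[:, None] * s + noise            # array snapshots
R = X @ X.conj().T / snapshots        # sample covariance
w = cmv_weights(R, a)                 # beamformer output would be w.conj() @ X
```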
Bayesian Compressed Sensing using Alpha-Stable Distributions
During the last decades, information has been gathered and processed at an explosive rate. This fact gives rise to a very important issue: how to effectively and precisely describe the information content of a given source signal, or an ensemble of source signals, such that it can be stored, processed or transmitted while taking into consideration the limitations and capabilities of the various digital devices. One of the fundamental principles of signal processing for decades has been the Nyquist-Shannon sampling theorem, which states that the minimum number of samples needed to reconstruct a signal without error is dictated by its bandwidth. However, there are many cases in everyday life in which sampling at the Nyquist rate results in too much data, demanding increased processing power as well as storage. A mathematical theory that emerged ...
Tzagkarakis, George — University of Crete
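To make the sampling theorem stated above concrete: a signal bandlimited to B Hz is exactly recovered from samples taken at fs ≥ 2B via Whittaker-Shannon (sinc) interpolation. The toy signal, bandwidth and rate below are illustrative assumptions; the small residual error comes only from truncating the infinite interpolation series.

```python
import numpy as np

B, fs = 5.0, 12.0                      # bandwidth (Hz) and sampling rate, fs > 2B
x = lambda u: np.cos(2 * np.pi * 3.0 * u) + 0.5 * np.sin(2 * np.pi * 5.0 * u)

n = np.arange(-50, 51)                 # truncated sample index set
t = np.linspace(-1, 1, 1000)           # reconstruction grid, well inside the samples
samples = x(n / fs)

# Whittaker-Shannon interpolation: x(t) = sum_n x(n/fs) * sinc(fs*t - n)
x_rec = samples @ np.sinc(fs * t[None, :] - n[:, None])
err = np.max(np.abs(x_rec - x(t)))     # small, up to series truncation
```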
Bayesian Fusion of Multi-band Images: A Powerful Tool for Super-resolution
Hyperspectral (HS) imaging, which consists of acquiring the same scene in several hundred contiguous spectral bands (a three-dimensional data cube), has opened a new range of relevant applications, such as target detection [MS02], classification [C.-03] and spectral unmixing [BDPD+12]. However, while HS sensors provide abundant spectral information, their spatial resolution is generally more limited. Thus, fusing the HS image with other highly resolved images of the same scene, such as multispectral (MS) or panchromatic (PAN) images, is an interesting problem. The problem of fusing a high spectral and low spatial resolution image with an auxiliary image of higher spatial but lower spectral resolution, also known as multi-resolution image fusion, has been explored for many years [AMV+11]. From an application point of view, this problem is also important, as motivated by recent national programs, e.g., the Japanese next-generation space-borne ...
Wei, Qi — University of Toulouse
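A sketch of the forward model underlying this kind of fusion: the observed HS image is a spatially blurred and downsampled version of the target high-resolution cube, while the MS image is a spectrally degraded version of it. The averaging blur, decimation factor and random spectral response below are toy stand-ins for the operators estimated or calibrated in practice.

```python
import numpy as np

rng = np.random.default_rng(5)
bands, rows, cols, d = 50, 32, 32, 4               # cube size and decimation (assumed)
X = rng.random((bands, rows, cols))                # target high-res HS cube

# HS observation: spatial blur + decimation (block averaging as a toy operator)
Y_hs = X.reshape(bands, rows // d, d, cols // d, d).mean(axis=(2, 4))

# MS observation: spectral degradation through a 4-band response matrix
S = rng.random((4, bands))
S /= S.sum(axis=1, keepdims=True)                  # normalized spectral responses
Y_ms = np.tensordot(S, X, axes=1)                  # 4-band MS image

# Bayesian fusion estimates X from (Y_hs, Y_ms) under a prior on X.
```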