Segmentation with a deformable surface model locally regularized by splines (2007)
Three dimensional shape modeling: segmentation, reconstruction and registration
Accounting for uncertainty in three-dimensional (3D) shapes is important in a large number of scientific and engineering areas, such as biometrics, biomedical imaging, and data mining. It is well known that 3D polar shaped objects can be represented by Fourier descriptors such as spherical harmonics and double Fourier series. However, the statistics of these spectral shape models have not been widely explored. This thesis studies several areas involved in 3D shape modeling, including random field models for statistical shape modeling, optimal shape filtering, parametric active contours for object segmentation and surface reconstruction. It also investigates multi-modal image registration with respect to tumor activity quantification. Spherical harmonic expansions over the unit sphere not only provide a low dimensional polarimetric parameterization of stochastic shape, but also correspond to the Karhunen-Loève (K-L) expansion of any isotropic random field on the unit sphere. Spherical ...
Li, Jia — University of Michigan
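The abstract above relies on expanding a star-shaped ("polar") surface in spherical harmonics. As a rough illustration (not the thesis' code), the following Python sketch projects a sampled radial function r(theta, phi) onto the harmonics Y_n^m using SciPy's sph_harm and reconstructs it from a truncated series; the grid resolution, truncation order and test surface are arbitrary choices made for the example.

# Minimal sketch: truncated spherical-harmonic expansion of a star-shaped surface,
# assuming a regular latitude-longitude sampling and simple rectangle quadrature.
import numpy as np
from scipy.special import sph_harm

def sh_expand(r, theta, phi, n_max):
    """Project samples r(theta, phi) onto Y_n^m, n <= n_max, by quadrature."""
    d_theta = theta[0, 1] - theta[0, 0]
    d_phi = phi[1, 0] - phi[0, 0]
    area = np.sin(phi) * d_theta * d_phi              # surface element on the unit sphere
    coeffs = {}
    for n in range(n_max + 1):
        for m in range(-n, n + 1):
            y = sph_harm(m, n, theta, phi)            # SciPy: theta = azimuth, phi = polar angle
            coeffs[(n, m)] = np.sum(r * np.conj(y) * area)
    return coeffs

def sh_reconstruct(coeffs, theta, phi):
    """Evaluate the truncated spherical-harmonic series at the given angles."""
    r_hat = np.zeros_like(theta, dtype=complex)
    for (n, m), c in coeffs.items():
        r_hat += c * sph_harm(m, n, theta, phi)
    return r_hat.real

# Toy usage: a mildly perturbed sphere, well captured by low harmonic orders.
phi, theta = np.meshgrid(np.linspace(1e-3, np.pi - 1e-3, 90),
                         np.linspace(0.0, 2 * np.pi, 180, endpoint=False),
                         indexing="ij")
r = 1.0 + 0.1 * np.cos(2 * phi)                       # hypothetical star-shaped surface
r_hat = sh_reconstruct(sh_expand(r, theta, phi, n_max=4), theta, phi)
print("max reconstruction error:", np.abs(r - r_hat).max())

The coefficients produced by such an expansion are the quantities whose statistics the thesis studies, since for an isotropic random field on the sphere they coincide with the Karhunen-Loève coordinates mentioned above.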
Domain-informed signal processing with application to analysis of human brain functional MRI data
Standard signal processing techniques are implicitly based on the assumption that the signal lies on a regular, homogeneous domain. In practice, however, many signals lie on an irregular or inhomogeneous domain. An application area where data are naturally defined on an irregular or inhomogeneous domain is human brain neuroimaging. The goal in neuroimaging is to map the structure and function of the brain using imaging techniques. In particular, functional magnetic resonance imaging (fMRI) is a technique that is conventionally used in non-invasive probing of human brain function. This doctoral dissertation deals with the development of signal processing schemes that adapt to the domain of the signal. It consists of four papers that in different ways deal with exploiting knowledge of the signal domain to enhance the processing of signals. In each paper, special focus is given to the analysis of ...
Behjat, Hamid — Lund University
In this doctoral thesis, several scale-free texture segmentation procedures are proposed, based on two fractal attributes: the Hölder exponent, measuring the local regularity of a texture, and the local variance. A piecewise homogeneous fractal texture model is built, along with a synthesis procedure, providing images composed of the aggregation of fractal texture patches with known attributes and segmentation. This synthesis procedure is used to evaluate the proposed methods' performance. A first method, based on the Total Variation regularization of a noisy estimate of the local regularity, is illustrated and refined thanks to a post-processing step consisting of an iterative thresholding and resulting in a segmentation. After evidencing the limitations of this first approach, two segmentation methods, with either "free" or "co-located" contours, are built, taking into account jointly the local regularity and the local variance. These two procedures are formulated as convex nonsmooth functional minimization problems. We ...
Pascal, Barbara — École Normale Supérieure de Lyon
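As a hedged sketch of the first method described above (Total Variation regularization of a noisy local regularity estimate, followed by thresholding), the snippet below denoises a synthetic piecewise-constant Hölder exponent map with scikit-image's Chambolle TV solver and then applies a simple iterative (isodata-style) threshold. The synthetic map, TV weight and stopping rule are illustrative assumptions, not the thesis' estimators.

# Sketch only: TV regularization of a noisy regularity map, then iterative thresholding.
import numpy as np
from skimage.restoration import denoise_tv_chambolle

rng = np.random.default_rng(0)

# Hypothetical piecewise-constant local-regularity map: h = 0.4 inside a disc, 0.7 outside.
yy, xx = np.mgrid[0:128, 0:128]
h_true = np.where((xx - 64) ** 2 + (yy - 64) ** 2 < 40 ** 2, 0.4, 0.7)
h_noisy = h_true + 0.15 * rng.standard_normal(h_true.shape)   # noisy pointwise estimate

# Total Variation regularization: smooth within regions, keep sharp region boundaries.
h_tv = denoise_tv_chambolle(h_noisy, weight=0.3)

# Iterative thresholding of the regularized map into two classes.
t = h_tv.mean()
for _ in range(100):
    t_new = 0.5 * (h_tv[h_tv <= t].mean() + h_tv[h_tv > t].mean())
    if abs(t_new - t) < 1e-6:
        break
    t = t_new
segmentation = h_tv > t
print("pixel accuracy:", (segmentation == (h_true > 0.5)).mean())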
Group-Sparse Regression - With Applications in Spectral Analysis and Audio Signal Processing
This doctoral thesis focuses on sparse regression, a statistical modeling tool for selecting valuable predictors in underdetermined linear models. By imposing different constraints on the structure of the variable vector in the regression problem, one obtains estimates which have sparse supports, i.e., where only a few of the elements in the variable vector have non-zero values. The thesis collects six papers which, to a varying extent, deal with applications, implementations, modifications, translations, and other analyses of such problems. Sparse regression is often used to approximate additive models with intricate, non-linear, non-smooth or otherwise problematic functions, by creating an underdetermined model consisting of candidate values for these functions, and linear response variables which select among the candidates. Sparse regression is therefore a widely used tool in applications such as image processing, audio processing, seismological and biomedical modeling, but is ...
Kronvall, Ted — Lund University
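To make the group-sparse regression problem in the title concrete, here is a minimal proximal-gradient (ISTA-style) sketch for minimizing 0.5*||y - Ax||^2 + lam * sum_g ||x_g||_2 over predefined groups of predictors. The group structure, step size and lam below are illustrative choices and are not taken from the thesis.

# Sketch of group-sparse regression via proximal gradient (block soft-thresholding).
import numpy as np

def group_lasso_ista(A, y, groups, lam, n_iter=1000):
    """Proximal gradient for 0.5*||y - A x||^2 + lam * sum_g ||x_g||_2."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2            # 1/L, L = Lipschitz constant of the gradient
    for _ in range(n_iter):
        z = x - step * A.T @ (A @ x - y)              # gradient step on the quadratic term
        for g in groups:                              # proximal step: block soft-thresholding
            norm_g = np.linalg.norm(z[g])
            z[g] *= max(0.0, 1.0 - step * lam / norm_g) if norm_g > 0 else 0.0
        x = z
    return x

# Toy usage: 40 samples, 100 predictors in 20 groups of 5, only two groups active.
rng = np.random.default_rng(1)
A = rng.standard_normal((40, 100))
groups = [np.arange(5 * k, 5 * (k + 1)) for k in range(20)]
x_true = np.zeros(100)
x_true[groups[3]] = rng.standard_normal(5)
x_true[groups[11]] = rng.standard_normal(5)
y = A @ x_true + 0.01 * rng.standard_normal(40)
x_hat = group_lasso_ista(A, y, groups, lam=0.5)
# For a reasonable lam, typically only the two true groups survive the thresholding.
print("groups kept:", [k for k, g in enumerate(groups) if np.linalg.norm(x_hat[g]) > 1e-3])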
In biological research, images are extensively used to monitor growth, dynamics and changes in biological specimens, such as cells or plants. Many of these images are used solely for observation or are manually annotated by an expert. In this dissertation we discuss several methods to automate the annotation and analysis of bio-images. Two large clusters of methods have been investigated and developed. A first set of methods focuses on the automatic delineation of relevant objects in bio-images, such as individual cells in microscopic images. Since these methods should be useful for many different applications, e.g. to detect and delineate different objects (cells, plants, leaves, ...) in different types of images (different types of microscopes, regular colour photographs, ...), the methods should be easy to adjust. Therefore we developed a methodology relying on probability theory, where all required parameters can easily ...
De Vylder, Jonas — Ghent University
Toward sparse and geometry adapted video approximations
Video signals are sequences of natural images, where images are often modeled as piecewise-smooth signals. Hence, video can be seen as a 3D piecewise-smooth signal made of piecewise-smooth regions that move through time. Based on the piecewise-smooth model and on related theoretical work on rate-distortion performance of wavelet and oracle based coding schemes, one can better analyze the appropriate coding strategies that adaptive video codecs need to implement in order to be efficient. Efficient video representations for coding purposes require the use of adaptive signal decompositions able to capture appropriately the structure and redundancy appearing in video signals. Adaptivity needs to be such that it allows for proper modeling of signals in order to represent these with the lowest possible coding cost. Video is a very structured signal with high geometric content. This includes temporal geometry (normally represented by motion ...
Divorra Escoda, Oscar — EPFL / Signal Processing Institute
Video Object Tracking with Feedback of Performance Measures
The task of segmentation and tracking of objects in a video sequence is an important high-level video processing problem for object-based video manipulation and representation. This task involves utilization of many low-level pre-processing tasks such as image segmentation and motion estimation. It is also very important to assess the performance of video object segmentation and tracking algorithms quantitatively and objectively. Performance evaluation measures are proposed both when ground-truth segmentation maps are available and when they are unavailable. A semi-automatic video object tracking method is introduced that uses the proposed performance evaluation measures in a feedback loop to adjust its parameters locally on the object boundary. New low-level image segmentation and motion estimation algorithms, namely, an illumination-invariant fuzzy image segmentation algorithm and a motion estimation algorithm in the frequency domain using fuzzy c-planes clustering, are also presented ...
Erdem, Cigdem Eroglu — Bogazici University
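For context on ground-truth-based evaluation of a segmented object, the snippet below computes two standard spatial-accuracy measures for a predicted object mask: intersection-over-union and a tolerance-based boundary F-measure. These are generic measures shown purely for illustration; they are not the specific feedback measures proposed in the thesis.

# Generic mask-vs-ground-truth measures (illustrative only).
import numpy as np
from scipy.ndimage import binary_dilation

def iou(pred, gt):
    """Intersection-over-union (region accuracy) of two boolean masks."""
    union = np.logical_or(pred, gt).sum()
    return np.logical_and(pred, gt).sum() / union if union else 1.0

def boundary_f1(pred, gt, tol=2):
    """F-measure between mask boundaries, with a tolerance of `tol` pixels."""
    def boundary(mask):
        return mask & binary_dilation(~mask)          # object pixels touching the background
    bp, bg = boundary(pred), boundary(gt)
    precision = (bp & binary_dilation(bg, iterations=tol)).sum() / max(bp.sum(), 1)
    recall = (bg & binary_dilation(bp, iterations=tol)).sum() / max(bg.sum(), 1)
    return 2 * precision * recall / max(precision + recall, 1e-12)

# Toy check: a mask compared against a slightly shifted version of itself.
gt = np.zeros((64, 64), dtype=bool)
gt[20:40, 20:40] = True
pred = np.roll(gt, 2, axis=1)
print("IoU:", round(iou(pred, gt), 3), " boundary F1:", round(boundary_f1(pred, gt), 3))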
Signal acquisition is a main topic in signal processing. The well-known Shannon-Nyquist theorem lies at the heart of any conventional analog-to-digital converter, stating that any signal has to be sampled at a constant frequency of at least twice the highest frequency present in the signal in order to be perfectly recovered. However, the Shannon-Nyquist theorem provides a worst-case rate bound for any bandlimited data. In this context, Compressive Sensing (CS) is a new framework in which data acquisition and data processing are merged. CS allows the data to be compressed while it is sampled, by exploiting the sparsity present in many common signals. In so doing, it provides an efficient way to reduce the number of measurements needed for perfect recovery of the signal. CS has exploded in recent years with thousands of technical publications and applications ...
Lagunas, Eva — Universitat Politecnica de Catalunya
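A minimal compressive-sensing toy example along the lines sketched above: a k-sparse signal is observed through m << n random projections and recovered by Orthogonal Matching Pursuit, one standard CS recovery algorithm. The dimensions, sensing matrix and recovery algorithm are illustrative assumptions; the thesis itself considers other CS formulations and applications.

# Sketch: sparse signal, random Gaussian measurements, greedy recovery (OMP).
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 64, 5                                  # ambient dim, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.uniform(1.0, 2.0, k) * rng.choice([-1, 1], k)
A = rng.standard_normal((m, n)) / np.sqrt(m)          # random Gaussian sensing matrix
y = A @ x_true                                        # m << n linear measurements

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily pick k atoms, refit by least squares."""
    support, residual, coeffs = [], y.copy(), None
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coeffs
    x = np.zeros(A.shape[1])
    x[support] = coeffs
    return x

x_hat = omp(A, y, k)
print("relative recovery error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))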
Facial features segmentation, analysis and recognition of facial expressions by the Transferable Belief Model
The aim of this work is the analysis and classification of facial expressions. Experiments in psychology show that humans are able to recognize emotions based on the visualization of the temporal evolution of some characteristic fiducial points. We therefore first propose an automatic system for the extraction of the permanent facial features (eyes, eyebrows and lips). In this work we are interested in the problem of the segmentation of the eyes and the eyebrows. The segmentation of lip contours is based on a previous work developed in the laboratory. The proposed algorithm for eye and eyebrow contour segmentation consists of three steps: firstly, the definition of parametric models to fit the contour of each feature as accurately as possible; then, a whole set of ...
Hammal, Zakia — GIPSA-lab/DIS
Signal Quantization and Approximation Algorithms for Federated Learning
Distributed signal or information processing using the Internet of Things (IoT) facilitates real-time monitoring of signals, for example environmental pollutants, health indicators, and electric energy consumption in a smart city. Despite the promising capabilities of IoT, these distributed deployments often face the challenges of data privacy and communication rate constraints. In traditional machine learning, training data is moved to a data center, which requires massive data movement from distributed IoT devices to a third-party location, thus raising concerns over privacy and inefficient use of communication resources. Moreover, the growing network size, model size, and data volume combined lead to unusual complexity in the design of optimization algorithms, beyond the compute capability of a single device. This necessitates novel system architectures to ensure stable and secure operation of such networks. Federated learning (FL) architecture, a novel distributed learning paradigm introduced by McMahan ...
A, Vijay — Indian Institute of Technology Bombay
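To illustrate the kind of rate-constrained federated averaging discussed above, the toy sketch below runs FedAvg rounds in which each client's model update is uniformly quantized before aggregation. The least-squares model, quantizer, and client data are assumptions made for this example; they are not the thesis' algorithms.

# Toy FedAvg with quantized client updates (illustrative assumptions throughout).
import numpy as np

rng = np.random.default_rng(0)

def quantize(v, n_bits=4):
    """Uniform scalar quantization of an update vector to n_bits per entry."""
    scale = np.max(np.abs(v)) + 1e-12
    levels = 2 ** (n_bits - 1)
    return np.round(v / scale * levels) / levels * scale

def local_train(w, X, y, lr=0.05, epochs=5):
    """A few local gradient steps on a least-squares objective."""
    for _ in range(epochs):
        w = w - lr * X.T @ (X @ w - y) / len(y)
    return w

# Three clients holding local data generated around a common ground-truth model.
w_true = np.array([1.0, -2.0, 0.5])
clients = []
for _ in range(3):
    X = rng.standard_normal((50, 3))
    clients.append((X, X @ w_true + 0.1 * rng.standard_normal(50)))

w_global = np.zeros(3)
for _ in range(30):                                   # FedAvg: broadcast, local train, aggregate
    updates = [quantize(local_train(w_global, X, y) - w_global) for X, y in clients]
    w_global = w_global + np.mean(updates, axis=0)
print("estimated model:", w_global.round(3))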
Density-based shape descriptors and similarity learning for 3D object retrieval
Next generation search engines will enable query formulations, other than text, relying on visual information encoded in terms of images and shapes. The 3D search technology, in particular, targets specialized application domains ranging from computer-aided design and manufacturing to cultural heritage archival and presentation. Content-based retrieval research aims at developing search engines that would allow users to perform a query by similarity of content. This thesis deals with two fundamental problems in content-based 3D object retrieval: (1) How to describe a 3D shape to obtain a reliable representative for the subsequent task of similarity search? (2) How to supervise the search process to learn inter-shape similarities for more effective and semantic retrieval? Concerning the first problem, we develop a novel 3D shape description scheme based on the probability density of multivariate local surface features. We constructively obtain local characterizations of 3D ...
Akgul, Ceyhun Burak — Bogazici University and Telecom ParisTech
Visual ear detection and recognition in unconstrained environments
Automatic ear recognition systems have seen increased interest over recent years due to multiple desirable characteristics. Ear images used in such systems can typically be extracted from profile head shots or video footage. The acquisition procedure is contactless and non-intrusive, and it also does not depend on the cooperation of the subjects. In this regard, ear recognition technology shares similarities with other image-based biometric modalities. Another appealing property of ear biometrics is its distinctiveness. Recent studies even empirically validated existing conjectures that certain features of the ear are distinct for identical twins. This fact has significant implications for security-related applications and puts ear images on a par with epigenetic biometric modalities, such as the iris. Ear images can also supplement other biometric modalities in automatic recognition systems and provide identity cues when other information is unreliable or even unavailable. In ...
Emeršič, Žiga — University of Ljubljana, Faculty of Computer and Information Science
Signal and Image Processing Techniques for Image-Based Photometry With Application to Diabetes Care
This PhD thesis addresses the problem of measuring blood glucose from a photometric measurement setup that requires blood samples in the nanolitre range, which is several orders of magnitude less than the state of the art. The chemical reaction between the blood sample and the reagent in this setup is observed by a camera over time. Notably, the presented framework can be generalised to any image-based photometric measurement scheme in the context of modern biosensors. In this thesis, a framework is developed to measure the glucose concentration from the raw images obtained by the camera. Initially, a pre-processing scheme is presented to enhance the raw images. Moreover, a reaction onset detection algorithm is developed. This eliminates unnecessary computation during the constant phase of the chemical reaction. To detect faulty glucose measurements, methods of texture analysis are identified and employed in ...
Demitri, Nevine — Technische Universität Darmstadt
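As a generic stand-in for the reaction onset detection step mentioned above, the sketch below applies a CUSUM-style change detector to a simulated mean-intensity curve: during the constant phase nothing triggers, and the onset is flagged once the cumulative deviation from the baseline exceeds a threshold. The signal model, baseline window, drift and threshold are all hypothetical; the thesis' detector is not reproduced here.

# Illustrative CUSUM onset detector on a simulated per-frame mean intensity.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical mean-intensity curve: constant phase, then an exponential reaction.
t = np.arange(400)
onset_true = 150
signal = 100.0 + 20.0 * (1 - np.exp(-(t - onset_true) / 60.0)) * (t >= onset_true)
signal = signal + 0.5 * rng.standard_normal(t.size)   # camera noise

# CUSUM on the deviation from the constant-phase baseline.
baseline = signal[:50].mean()
sigma = signal[:50].std()
drift, threshold = 0.5 * sigma, 8.0 * sigma
cusum, onset_est = 0.0, None
for i, s in enumerate(signal):
    cusum = max(0.0, cusum + (s - baseline) - drift)
    if cusum > threshold:
        onset_est = i
        break
print("estimated onset frame:", onset_est, "(true onset:", onset_true, ")")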
Distributed Stochastic Optimization in Non-Differentiable and Non-Convex Environments
The first part of this dissertation considers distributed learning problems over networked agents. The general objective of distributed adaptation and learning is the solution of global, stochastic optimization problems through localized interactions and without information about the statistical properties of the data. Regularization is a useful technique to encourage or enforce structural properties on the resulting solution, such as sparsity or constraints. A substantial number of regularizers are inherently non-smooth, while many cost functions are differentiable. We propose distributed and adaptive strategies that are able to minimize aggregate sums of objectives. In doing so, we exploit the structure of the individual objectives as sums of differentiable costs and non-differentiable regularizers. The resulting algorithms are adaptive in nature and able to continuously track drifts in the problem; their recursions, however, are subject to persistent perturbations arising from the stochastic nature of ...
Vlaski, Stefan — University of California, Los Angeles
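As a hedged illustration of the setting above (a smooth cost plus a non-differentiable regularizer, minimized cooperatively over a network), the sketch below runs a diffusion-style adapt-then-combine recursion with a soft-thresholding proximal step for an l1 regularizer. The ring topology, combination weights, step size and data model are illustrative assumptions, not the thesis' exact strategies.

# Sketch: adapt-then-combine diffusion with a proximal (soft-thresholding) step.
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim = 10, 20
w_true = np.zeros(dim)
w_true[:3] = [1.0, -0.5, 2.0]                         # sparse target model

# Ring network with uniform, doubly stochastic combination weights.
C = np.zeros((n_agents, n_agents))
for k in range(n_agents):
    C[k, [k, (k - 1) % n_agents, (k + 1) % n_agents]] = 1 / 3

mu, lam = 0.01, 0.01
W = np.zeros((n_agents, dim))                         # one iterate per agent
for _ in range(3000):
    psi = np.empty_like(W)
    for k in range(n_agents):                         # adapt: stochastic gradient on local data
        h = rng.standard_normal(dim)                  # streaming regressor
        d = h @ w_true + 0.1 * rng.standard_normal()  # noisy local measurement
        psi[k] = W[k] - mu * (h @ W[k] - d) * h
    # proximal step for the l1 regularizer, then combination over the network
    psi = np.sign(psi) * np.maximum(np.abs(psi) - mu * lam, 0.0)
    W = C @ psi
print("network-average error:", np.linalg.norm(W.mean(axis=0) - w_true))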
Robust Network Topology Inference and Processing of Graph Signals
The abundance of large and heterogeneous systems is rendering contemporary data more pervasive, intricate, and irregularly structured. With classical techniques struggling to deal with the irregular (non-Euclidean) domains on which the signals are defined, a popular approach at the heart of graph signal processing (GSP) is to: (i) represent the underlying support via a graph and (ii) exploit the topology of this graph to process the signals at hand. In addition to the irregular structure of the signals, another critical limitation is that the observed data is prone to perturbations, which, in the context of GSP, may affect not only the observed signals but also the topology of the supporting graph. Ignoring the presence of perturbations, along with the couplings between the errors in the signal and the errors in their support, can drastically hinder ...
Rey, Samuel — King Juan Carlos University
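A small GSP example in the spirit of steps (i) and (ii) above: given an assumed graph topology, a noisy graph signal is denoised by Tikhonov regularization with the graph Laplacian, i.e. x_hat = argmin ||x - y||^2 + gamma * x^T L x. The topology, signal model and regularization weight are hypothetical; the thesis' robust formulations additionally account for perturbations in the graph itself.

# Sketch: Laplacian-based (Tikhonov) denoising of a signal defined on a graph.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical topology: a 30-node cycle graph with a few extra chords.
n = 30
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1
for i, j in [(0, 15), (5, 22), (10, 27)]:
    A[i, j] = A[j, i] = 1
L = np.diag(A.sum(axis=1)) - A                        # combinatorial graph Laplacian

# A signal that is smooth on the graph (low graph frequencies), plus noise.
eigval, eigvec = np.linalg.eigh(L)
x_true = eigvec[:, :3] @ rng.standard_normal(3)       # combination of the 3 smoothest modes
y = x_true + 0.3 * rng.standard_normal(n)

gamma = 2.0
x_hat = np.linalg.solve(np.eye(n) + gamma * L, y)     # closed-form Tikhonov denoiser
print("noisy error:   ", np.linalg.norm(y - x_true))
print("filtered error:", np.linalg.norm(x_hat - x_true))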