Adaptive Edge-Enhanced Correlation Based Robust and Real-Time Visual Tracking Framework and Its Deployment in Machine Vision Systems

An adaptive edge-enhanced correlation-based robust and real-time visual tracking framework, and two machine vision systems based on it, are proposed. The visual tracking algorithm can track any object of interest in video acquired from a stationary or moving camera. It can handle real-world problems such as noise, clutter, occlusion, uneven illumination, varying appearance, orientation, scale, and velocity of the maneuvering object, as well as object fading and obscuration in low-contrast video at various zoom levels. The proposed machine vision systems are an active camera tracking system and a vision-based system for a UGV (unmanned ground vehicle) to handle a road intersection. The core of the proposed visual tracking framework is an Edge Enhanced Back-propagation neural-network Controlled Fast Normalized Correlation (EE-BCFNC), which makes the object localization stage efficient and robust to noise, object fading, obscuration, and uneven ...
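As a rough illustration of the correlation core, the sketch below implements plain normalized cross-correlation (NCC) template matching in NumPy for a grayscale frame; it is a minimal baseline only and does not reproduce the edge enhancement, neural-network control, or fast computation of the thesis's EE-BCFNC. The function and variable names are illustrative.

```python
import numpy as np

def normalized_correlation(frame, template):
    """Normalized cross-correlation surface of a grayscale template over a frame.
    The peak of the returned surface indicates the most likely object location."""
    th, tw = template.shape
    fh, fw = frame.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    out = np.zeros((fh - th + 1, fw - tw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            win = frame[y:y + th, x:x + tw]
            w = win - win.mean()
            denom = np.sqrt((w ** 2).sum()) * t_norm
            out[y, x] = (w * t).sum() / denom if denom > 0 else 0.0
    return out

# Usage: the argmax of the surface gives the top-left corner of the best match.
# y, x = np.unravel_index(np.argmax(ncc), ncc.shape)
```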

Ahmed, Javed — Electrical (Telecom.) Engineering Department, National University of Sciences and Technology, Rawalpindi, Pakistan.


Gait Analysis in Unconstrained Environments

Gait can be defined as an individual's manner of walking. Its analysis can provide significant information about identity and health, opening a wide range of possibilities in the fields of biometric recognition and medical diagnosis. In the field of biometrics, the use of gait to perform recognition can provide advantages such as acquisition from a distance and without the cooperation of the individual being observed. In the field of medicine, gait analysis can be used to detect or assess the development of different gait-related pathologies. It can also be used to assess neurological or systemic disorders, as their effects are reflected in an individual's gait. This thesis focuses on performing gait analysis in unconstrained environments, using a single 2D camera. This can be a challenging task due to the lack of depth information and self-occlusions in a 2D ...

Tanmay Tulsidas Verlekar — Universidade de Lisboa, Instituto Superior Técnico


Voice biometric system security: Design and analysis of countermeasures for replay attacks

Voice biometric systems use automatic speaker verification (ASV) technology for user authentication. Although ASV is among the most convenient means of biometric authentication, its robustness and security in the face of spoofing attacks (or presentation attacks) are of growing concern and are now well acknowledged by the research community. A spoofing attack involves illegitimate access to the personal data of a targeted user. Replay is among the simplest attacks to mount, yet it is difficult to detect reliably, and it is the focus of this thesis. This research focuses on the analysis and design of existing and novel countermeasures for replay attack detection in ASV, organised in two major parts. The first part of the thesis investigates existing methods for spoofing detection from several perspectives. I first study the generalisability of hand-crafted features for replay detection that show promising results ...

Bhusan Chettri — Queen Mary University of London


Deep learning for semantic description of visual human traits

The recent progress in artificial neural networks (rebranded as “deep learning”) has significantly boosted the state-of-the-art in numerous domains of computer vision, offering an opportunity to approach problems which were hardly solvable with conventional machine learning. Thus, in the frame of this PhD study, we explore how deep learning techniques can help in the analysis of two of the most basic and essential semantic traits revealed by a human face, namely gender and age. In particular, two complementary problem settings are considered: (1) gender/age prediction from given face images, and (2) synthesis and editing of human faces with the required gender/age attributes. The Convolutional Neural Network (CNN) has become a standard model for image-based object recognition in general, and therefore is a natural choice for addressing the first of these two problems. However, our preliminary studies have shown that the ...

Antipov, Grigory — Télécom ParisTech (Eurecom)


Camera based motion estimation and recognition for human-computer interaction

Communicating with mobile devices has become an unavoidable part of our daily life. Unfortunately, current user interface designs are mostly taken directly from desktop computers. This has resulted in devices that are sometimes hard to use. Since more processing power and new sensing technologies are already available, there is an opportunity to develop systems that communicate through different modalities. This thesis proposes some novel computer vision approaches, including head tracking, object motion analysis, and device ego-motion estimation, to allow efficient interaction with mobile devices. For head tracking, two new methods have been developed. The first method detects a face region and facial features by employing skin detection, morphology, and a geometrical face model. The second method, designed especially for mobile use, detects the face and eyes using local texture features. In both cases, Kalman filtering is applied to estimate ...
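The abstract mentions Kalman filtering applied to the tracked face and eye positions. The minimal sketch below shows a generic constant-velocity Kalman filter over noisy 2D point measurements; the state model, noise levels, and function names are assumptions for illustration, not the thesis's exact formulation.

```python
import numpy as np

def track_points(measurements, dt=1.0, q=1e-2, r=1.0):
    """Constant-velocity Kalman filter over noisy 2D positions (x, y).
    State: [x, y, vx, vy]; only positions are observed."""
    F = np.eye(4); F[0, 2] = F[1, 3] = dt           # state transition
    H = np.zeros((2, 4)); H[0, 0] = H[1, 1] = 1.0   # observe positions only
    Q = q * np.eye(4)                                # process noise
    R = r * np.eye(2)                                # measurement noise
    x = np.array([measurements[0][0], measurements[0][1], 0.0, 0.0])
    P = np.eye(4)
    smoothed = []
    for z in measurements:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the new measurement
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.asarray(z, dtype=float) - H @ x)
        P = (np.eye(4) - K @ H) @ P
        smoothed.append(x[:2].copy())
    return smoothed
```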

Hannuksela, Jari — University of Oulu


Acoustic Event Detection: Feature, Evaluation and Dataset Design

It takes more time to think of a silent scene, action, or event than to find one that emanates sound. Not only speaking or playing music, but almost everything that happens is accompanied by, or results in, one or more sounds mixed together. This makes acoustic event detection (AED) one of the most researched topics in audio signal processing nowadays, and it will probably not see a decline in the near future. This is due to the thirst for understanding and digitally abstracting more and more events in life from the enormous amount of audio recorded by the thousands of applications in our daily routine. But it is also a result of two intrinsic properties of audio: it does not need a direct line of sight to be perceived, and it is less intrusive to record compared with image or video. Many applications such ...

Mina Mounir — KU Leuven, ESAT STADIUS


Audio Visual Speech Enhancement

This thesis presents a novel approach to speech enhancement by exploiting the bimodality of speech production and the correlation that exists between audio and visual speech information. An analysis of the correlation between a range of audio and visual features reveals significant correlation between visual speech features and audio filterbank features. The correlation was also found to be greater when analysed within individual phonemes rather than across all phonemes. This led to building a Gaussian Mixture Model (GMM) that is capable of estimating filterbank features from visual features. Phoneme-specific GMMs gave lower filterbank estimation errors, with the phoneme transcription decoded using an audio-visual Hidden Markov Model (HMM). Clean filterbank estimates along with mean noise estimates were then utilised to construct visually-derived Wiener filters that are able to enhance noisy speech. The mean noise ...
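For intuition, the sketch below shows how per-channel Wiener gains could be formed from estimated clean and noise filterbank energies and applied to a noisy spectrum; it is a generic textbook construction, not the thesis's visually-derived filter, and the function names and the gain floor are illustrative assumptions.

```python
import numpy as np

def wiener_gains(clean_fb_estimate, noise_fb_estimate, floor=1e-3):
    """Per-channel Wiener gains: gain_k = S_k / (S_k + N_k), with a floor
    to avoid over-suppression when the clean estimate is very small."""
    S = np.maximum(clean_fb_estimate, 0.0)
    N = np.maximum(noise_fb_estimate, 0.0)
    gains = S / np.maximum(S + N, 1e-12)
    return np.maximum(gains, floor)

def enhance_frame(noisy_spectrum, gains, filterbank):
    """Apply filterbank-domain gains to a linear-frequency magnitude spectrum.
    `filterbank` maps FFT bins to channels (shape: channels x bins)."""
    # Spread each channel gain back to the FFT bins via the normalised filterbank.
    weights = filterbank / np.maximum(filterbank.sum(axis=0, keepdims=True), 1e-12)
    bin_gains = weights.T @ gains
    return noisy_spectrum * bin_gains
```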

Almajai, Ibrahim — University of East Anglia


Real Time Stereo to Multi-view Video Conversion

A novel and efficient methodology is presented for the conversion of stereo to multi-view video in order to address the 3D content requirements of next-generation 3D-TVs and auto-stereoscopic multi-view displays. There are two main algorithmic blocks in such a conversion system: stereo matching and virtual view rendering, which enable extraction of 3D information from stereo video and synthesis of non-existent virtual views, respectively. In the intermediate steps of these functional blocks, a novel edge-preserving filter is proposed that recursively constructs connected support regions for each pixel among color-wise similar neighboring pixels. The proposed recursive update structure eliminates the pre-defined window dependency of conventional approaches, providing complete content adaptability with quite low computational complexity. Based on extensive tests, it is observed that the proposed filtering technique yields better or competitive results compared with some leading techniques in the literature. The ...
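The abstract does not detail the exact support-region construction, so the sketch below only illustrates the general flavour of a recursive, colour-similarity-weighted smoothing pass (one left-to-right sweep); the parameters and function name are assumptions for illustration and this is not the thesis's filter.

```python
import numpy as np

def recursive_edge_aware_smooth(image, sigma_color=0.1, alpha=0.7):
    """One left-to-right recursive pass: each pixel is blended with its already
    smoothed left neighbour, with the blend weight shrinking across strong
    colour edges. A full filter would also sweep right-to-left and vertically."""
    img = np.asarray(image, dtype=np.float64)
    out = img.copy()
    for y in range(img.shape[0]):
        for x in range(1, img.shape[1]):
            diff = np.abs(img[y, x] - img[y, x - 1]).sum()
            w = alpha * np.exp(-diff / sigma_color)   # colour-similarity feedback
            out[y, x] = (1.0 - w) * img[y, x] + w * out[y, x - 1]
    return out
```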

Cigla, Cevahir — Middle East Technical University


An Attention Model and its Application in Man-Made Scene Interpretation

The ultimate aim of computer vision research is to design a system which interprets its surrounding environment as effortlessly as a human can. However, the state of technology is far from achieving such a goal. In this thesis, different components of a computer vision system designed for the task of interpreting man-made scenes, in particular images of buildings, are described. The flow of information in the proposed system is bottom-up, i.e., the image is first segmented into its meaningful components, and the regions are subsequently labelled using a contextual classifier. Starting from simple observations concerning the human vision system and the gestalt laws of human perception, like the law of 'good (simple) shape' and 'perceptual grouping', a blob detector is developed that identifies components in a 2D image. These components are convex regions of interest, ...

Jahangiri, Mohammad — Imperial College London


Learning Transferable Knowledge through Embedding Spaces

The unprecedented processing demand, posed by the explosion of big data, challenges researchers to design efficient and adaptive machine learning algorithms that do not require persistent retraining and avoid learning redundant information. Inspired by the learning techniques of intelligent biological agents, identifying transferable knowledge across learning problems has been a significant research focus for improving machine learning algorithms. In this thesis, we address the challenges of knowledge transfer through embedding spaces that capture and store hierarchical knowledge. In the first part of the thesis, we focus on the problem of cross-domain knowledge transfer. We first address zero-shot image classification, where the goal is to identify images from unseen classes using semantic descriptions of these classes. We train two coupled dictionaries which align visual and semantic domains via an intermediate embedding space. We then extend this idea by training deep networks that ...
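To make the zero-shot setting concrete, the sketch below shows nearest-neighbour classification in a shared embedding space after visual features and class descriptions have been projected into it; the linear projections W_visual and W_semantic are assumed pre-trained and stand in for the thesis's coupled-dictionary alignment, which is not reproduced here.

```python
import numpy as np

def zero_shot_predict(image_features, class_descriptions, W_visual, W_semantic):
    """Nearest-neighbour zero-shot classification in a shared embedding space.
    W_visual and W_semantic project visual features and class descriptions
    (e.g. attribute vectors) into the same d-dimensional space."""
    img_emb = image_features @ W_visual          # (n_images, d)
    cls_emb = class_descriptions @ W_semantic    # (n_classes, d)
    # Cosine similarity between every image and every (unseen) class prototype.
    img_emb = img_emb / (np.linalg.norm(img_emb, axis=1, keepdims=True) + 1e-12)
    cls_emb = cls_emb / (np.linalg.norm(cls_emb, axis=1, keepdims=True) + 1e-12)
    scores = img_emb @ cls_emb.T
    return scores.argmax(axis=1)                 # predicted class index per image
```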

Mohammad Rostami — University of Pennsylvania


Biological Image Analysis

In biological research, images are extensively used to monitor growth, dynamics, and changes in biological specimens, such as cells or plants. Many of these images are used solely for observation or are manually annotated by an expert. In this dissertation we discuss several methods to automate the annotation and analysis of bio-images. Two large clusters of methods have been investigated and developed. A first set of methods focuses on the automatic delineation of relevant objects in bio-images, such as individual cells in microscopic images. Since these methods should be useful for many different applications, e.g. to detect and delineate different objects (cells, plants, leaves, ...) in different types of images (different types of microscopes, regular colour photographs, ...), the methods should be easy to adjust. Therefore we developed a methodology relying on probability theory, where all required parameters can easily ...

De Vylder, Jonas — Ghent University


Denoising and Features Extraction of ECG Signals using Unbiased FIR Estimation Techniques

Electrocardiogram (ECG) signals bear fundamental information for medical experts to make decisions about heart diseases. Therefore, in past decades the scientific community has made great efforts to develop methods for extracting heartbeat features from ECG records with the highest accuracy and efficiency, using different strategies. It should be noted that noise and artifacts induced by external factors make it difficult to learn specific patterns of ECG signals, which play an important role in finding abnormalities. Using filtering techniques such as the unbiased finite impulse response (UFIR) approach promises better results. Aiming to extract the features with the highest accuracy, in this dissertation we have designed adaptive UFIR filters and smoothers and applied them to ECG signals. We also compared the proposed technique with traditional methods such as UFIR predictors and standard filters (e.g., low-pass filters), ...
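As a minimal sketch of the UFIR idea, the code below implements only the simplest (degree-zero, locally constant) case, where the unbiased FIR estimate over a horizon of N samples reduces to their average and, unlike a Kalman filter, needs no noise statistics; the adaptive, higher-degree filters and smoothers of the thesis are not shown, and the function name and horizon handling are assumptions.

```python
import numpy as np

def ufir_constant_smoother(signal, horizon):
    """Degree-zero UFIR estimate of an ECG sample sequence: average the last
    `horizon` samples. No process or measurement noise statistics are required."""
    x = np.asarray(signal, dtype=float)
    kernel = np.ones(horizon) / horizon
    # 'same' keeps the output aligned with the input length for plotting.
    return np.convolve(x, kernel, mode="same")
```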

Lastre Dominguez Carlos Mauricio — Universidad de Guanajuato


Sensor Fusion for Automotive Applications

Mapping stationary objects and tracking moving targets are essential for many autonomous functions in vehicles. In order to compute the map and track estimates, sensor measurements from radar, laser, and camera are used together with the standard proprioceptive sensors present in a car. By fusing information from different types of sensors, the accuracy and robustness of the estimates can be increased. Different types of maps are discussed and compared in the thesis. In particular, road maps make use of the fact that roads are highly structured, which allows relatively simple and powerful models to be employed. It is shown how information about the lane markings, obtained by a front-looking camera, can be fused with inertial measurements of the vehicle motion and radar measurements of vehicles ahead to compute a more accurate and robust road geometry estimate. Further, it ...
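To illustrate why fusing sensors improves accuracy, the sketch below shows simple inverse-variance fusion of independent measurements of the same quantity (e.g., road curvature from camera lane markings and from radar-tracked leading vehicles); this is a generic maximum-likelihood combination, not the thesis's filtering framework, and the example numbers are invented for illustration.

```python
import numpy as np

def fuse_measurements(values, variances):
    """Inverse-variance fusion of independent measurements of one quantity.
    The fused variance is never larger than the smallest input variance."""
    values = np.asarray(values, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)
    fused = (w * values).sum() / w.sum()
    fused_var = 1.0 / w.sum()
    return fused, fused_var

# Hypothetical usage: camera-based curvature 0.010 1/m (var 4e-6) fused with a
# radar-based estimate 0.013 1/m (var 1e-5) yields a value closer to the camera's,
# with lower variance than either sensor alone.
```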

Lundquist, Christian — Linköping University


Motion Estimation and Compensation of Video Sequences using Affine Transforms

Motion estimation and compensation is of great importance for the compression of video sequences. In this dissertation, a motion estimation/compensation approach based on a non-overlapping connected mesh of triangles is proposed. To manipulate the triangles within the connected mesh, or ‘rubber sheet’ structure, affine transforms are used, which allow many different types of motion to be accurately modelled. Another advantage of this structure is that the non-overlapping triangles do not generate the typical artefacts associated with the current block-based standards when operating at very low bitrates. The initial motion estimation/compensation algorithms investigated implement a full-search method which updates one vertex at a time, matching sets of triangles between adjacent frames. Although the prediction performance is good, the resulting computational load is high. This issue is addressed by deriving gradient-based algorithms which are found to be between one ...
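For reference, the sketch below shows how the affine transform between two corresponding triangles can be computed from their vertices (three point correspondences determine the six affine parameters exactly); it is a generic construction rather than the thesis's estimation algorithm, and the function names are illustrative.

```python
import numpy as np

def triangle_affine(src_tri, dst_tri):
    """2x3 affine matrix A such that A @ [x, y, 1]^T maps each vertex of
    src_tri onto the corresponding vertex of dst_tri (triangles as 3x2 arrays)."""
    src = np.hstack([np.asarray(src_tri, dtype=float), np.ones((3, 1))])  # 3x3
    dst = np.asarray(dst_tri, dtype=float)                                # 3x2
    # Solve src @ A.T = dst for the six affine parameters.
    return np.linalg.solve(src, dst).T

def warp_point(A, p):
    """Apply the 2x3 affine transform to a single 2D point."""
    return A @ np.array([p[0], p[1], 1.0])
```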

Bradshaw, David Benedict — University of Cambridge


Automated Face Recognition from Low-resolution Imagery

Recently, significant advances in the field of automated face recognition have been achieved using computer vision, machine learning, and deep learning methodologies. However, despite claims of super-human performance of face recognition algorithms on select key benchmark tasks, there remain several open problems that preclude the general replacement of human face recognition work with automated systems. State-of-the-art automated face recognition systems based on deep learning methods are able to achieve high accuracy when the face images from which they must recognize subjects are of sufficiently high quality. However, low image resolution remains one of the principal obstacles to face recognition systems, and their performance in the low-resolution regime is decidedly below human capabilities. In this PhD thesis, we present a systematic study of modern automated face recognition systems in the presence of image degradation in various forms. Based on our ...

Grm, Klemen — University of Ljubljana
