Selected Topics in Inertial and Visual Sensor Fusion: Calibration, Observability Analysis and Applications (2014)
Sensor Fusion and Calibration using Inertial Sensors, Vision, Ultra-Wideband and GPS
The use of inertial sensors has traditionally been confined primarily to the aviation and marine industries due to their cost and bulkiness. During the last decade, however, inertial sensors have undergone a dramatic reduction in both size and cost with the introduction of MEMS technology. As a result of this trend, inertial sensors have become commonplace in many applications and can even be found in many consumer products, for instance smartphones, cameras and game consoles. Due to the drift inherent in inertial technology, inertial sensors are typically used in combination with aiding sensors to stabilize and improve the estimates. The need for aiding sensors becomes even more apparent given the reduced accuracy of MEMS inertial sensors. This thesis discusses two problems related to using inertial sensors in combination with aiding sensors. The first is the problem of ...
Hol, Jeroen — Linköping University
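The drift-plus-aiding idea in the abstract above can be made concrete with a toy example. The sketch below is a minimal illustration in Python/NumPy, not taken from the thesis; the signals, noise levels and blending constant are invented. It fuses an integrated, biased gyroscope rate with a noisy but absolute accelerometer-derived tilt angle using a simple complementary filter.

```python
import numpy as np

# Illustrative only: fuse a drifting gyroscope integral with an absolute
# (but noisy) accelerometer tilt angle via a complementary filter.
rng = np.random.default_rng(0)
dt = 0.01                                      # 100 Hz sample rate (assumed)
t = np.arange(0, 10, dt)
true_angle = 0.5 * np.sin(0.5 * t)             # ground-truth tilt [rad]
true_rate = 0.25 * np.cos(0.5 * t)             # its derivative [rad/s]

gyro = true_rate + 0.02 + 0.01 * rng.standard_normal(t.size)   # biased, noisy rate
acc_angle = true_angle + 0.05 * rng.standard_normal(t.size)    # noisy absolute angle

alpha = 0.98                       # trust the gyro on short time scales
est_gyro_only = np.zeros(t.size)   # pure integration: drifts due to the bias
est_fused = np.zeros(t.size)       # complementary filter: drift stays bounded
for k in range(1, t.size):
    est_gyro_only[k] = est_gyro_only[k - 1] + gyro[k] * dt
    predicted = est_fused[k - 1] + gyro[k] * dt
    est_fused[k] = alpha * predicted + (1 - alpha) * acc_angle[k]

print("final error, gyro only :", abs(est_gyro_only[-1] - true_angle[-1]))
print("final error, fused     :", abs(est_fused[-1] - true_angle[-1]))
```

Pure integration drifts without bound because of the rate bias, while the small amount of absolute aiding keeps the fused estimate close to the true angle.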
Multi-Sensor Integration for Indoor 3D Reconstruction
Outdoor maps and navigation information delivered by modern services and technologies such as Google Maps and Garmin navigators have revolutionized the lifestyle of many people. Motivated by the demand from consumers, advertisers, and emergency rescuers/responders for similar navigation systems indoors, many indoor environments such as shopping malls, museums, casinos, airports, transit stations, offices, and schools need to be mapped. Typically, the environment is first reconstructed by capturing many point clouds from various stations and defining their spatial relationships. Currently, there is no accurate, rigorous, and fast method for relating point clouds in indoor, urban, satellite-denied environments. This thesis presents a novel, automatic way of fusing calibrated point clouds obtained using a terrestrial laser scanner and the Microsoft Kinect by integrating them with a low-cost inertial measurement unit. The developed system, titled the Scannect, is the ...
Chow, Jacky — University of Calgary
Probabilistic modeling for sensor fusion with inertial measurements
In recent years, inertial sensors have undergone major developments. The quality of their measurements has improved while their cost has decreased, leading to an increase in availability. They can be found in stand-alone sensor units, so-called inertial measurement units, but are nowadays also present in, for instance, any modern smartphone, in Wii controllers and in virtual reality headsets. The term inertial sensor refers to the combination of accelerometers and gyroscopes. These measure the external specific force and the angular velocity, respectively. Integration of their measurements provides information about the sensor’s position and orientation. However, the position and orientation estimates obtained by simple integration suffer from drift and are therefore only accurate on a short time scale. In order to improve these estimates, we combine the inertial sensors with additional sensors and models. To combine these different sources of information, also ...
Kok, Manon — Linköping University
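A back-of-the-envelope calculation makes the drift referred to above concrete; the bias value is purely illustrative and not from the thesis. A constant accelerometer bias b passes through double integration into position, so the position error grows quadratically with time:

```latex
p_{\mathrm{err}}(t) \;=\; \int_0^t \!\!\int_0^{\tau} b \,\mathrm{d}\sigma\,\mathrm{d}\tau \;=\; \tfrac{1}{2}\, b\, t^2,
\qquad
b = 0.01~\mathrm{m/s^2} \;\Rightarrow\; p_{\mathrm{err}}(60~\mathrm{s}) = 18~\mathrm{m}.
```

This is why position-level aiding sensors or additional models are needed for anything beyond short time scales.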
Gait Analysis in Unconstrained Environments
Gait can be defined as an individual’s manner of walking. Its analysis can provide significant information about a person’s identity and health, opening a wide range of possibilities in the fields of biometric recognition and medical diagnosis. In the field of biometrics, the use of gait for recognition offers advantages such as acquisition from a distance and without the cooperation of the individual being observed. In the field of medicine, gait analysis can be used to detect or assess the development of different gait-related pathologies. It can also be used to assess neurological or systemic disorders, as their effects are reflected in an individual’s gait. This thesis focuses on performing gait analysis in unconstrained environments, using a single 2D camera. This is a challenging task due to the lack of depth information and self-occlusions in a 2D ...
Verlekar, Tanmay Tulsidas — Universidade de Lisboa, Instituto Superior Técnico
Acoustic sensor network geometry calibration and applications
In the modern world, we are increasingly surrounded by computing devices with communication links and one or more microphones. Such devices include smartphones, tablets, laptops and hearing aids. These devices can work together as nodes in an acoustic sensor network (ASN). Such networks are a growing platform that opens up many practical applications. ASN-based speech enhancement, source localization, and event detection can be applied to teleconferencing, camera control, automation, or assisted living. For these kinds of applications, awareness of auditory objects and of their spatial positions is a key requirement. In order to provide these two kinds of information, novel methods have been developed in this thesis. Information on the type of auditory objects is provided by a novel real-time sound classification method. Information on the position of human speakers is provided by a novel localization ...
Plinge, Axel — TU Dortmund University
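As a toy illustration of how a microphone pair in an ASN can localize a speaker (not the thesis’s method; the sample rate, microphone spacing, and synthetic signals below are all assumed), the Python/NumPy sketch estimates a time difference of arrival from the cross-correlation peak between two channels and converts it to a bearing.

```python
import numpy as np

# Illustrative only: estimate the time difference of arrival (TDOA) between
# two microphones by locating the cross-correlation peak, then convert it
# to a bearing with the far-field approximation.
fs = 16000                         # sample rate [Hz] (assumed)
c = 343.0                          # speed of sound [m/s]
d = 0.2                            # microphone spacing [m] (assumed)
true_delay = 5                     # true lag in samples (made up)

rng = np.random.default_rng(1)
src = rng.standard_normal(4000)                          # stand-in for a speech frame
mic1 = src + 0.05 * rng.standard_normal(src.size)
mic2 = np.roll(src, true_delay) + 0.05 * rng.standard_normal(src.size)

corr = np.correlate(mic2, mic1, mode="full")             # lags -N+1 .. N-1
lag = np.argmax(corr) - (src.size - 1)                   # peak position -> delay in samples
tdoa = lag / fs
angle = np.degrees(np.arcsin(np.clip(tdoa * c / d, -1.0, 1.0)))
print(f"estimated lag: {lag} samples, bearing ~ {angle:.1f} deg")
```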
Estimation for Sensor Fusion and Sparse Signal Processing
Progressive developments in computing and sensor technologies during the past decades have enabled the formulation of increasingly advanced problems in statistical inference and signal processing. The thesis is concerned with statistical estimation methods and is divided into three parts, focusing on two different areas: sensor fusion and sparse signal processing. The first part introduces the well-established Bayesian, Fisherian and least-squares estimation frameworks, and derives new estimators. Specifically, the Bayesian framework is applied in two different classes of estimation problems: scenarios in which (i) the signal covariances themselves are subject to uncertainties, and (ii) distance bounds are used as side information. Applications include localization, tracking and channel estimation. The second part is concerned with the extraction of useful information from multiple sensors by exploiting their joint properties. Two sensor configurations are considered here: (i) a monocular camera and an inertial ...
Zachariah, Dave — KTH Royal Institute of Technology
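The common thread of the estimation frameworks mentioned above can be shown on the smallest possible example (numbers invented, not from the thesis): fusing two noisy measurements of the same quantity by inverse-variance weighting, which is both the least-squares and the Gaussian Bayesian answer.

```python
# Illustrative only: fuse two noisy measurements of the same quantity by
# inverse-variance weighting (values made up for the example).
y1, var1 = 10.3, 4.0     # e.g. a coarse range measurement
y2, var2 = 9.8, 1.0      # e.g. a more precise one

w1 = (1 / var1) / (1 / var1 + 1 / var2)
w2 = (1 / var2) / (1 / var1 + 1 / var2)
estimate = w1 * y1 + w2 * y2            # about 9.9
variance = 1 / (1 / var1 + 1 / var2)    # about 0.8, smaller than either input
print(estimate, variance)
```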
Contributions to Human Motion Modeling and Recognition using Non-intrusive Wearable Sensors
This thesis contributes to motion characterization through inertial and physiological signals captured by wearable devices and analyzed using signal processing and deep learning techniques. This research leverages motion analysis for three main applications: recognizing what physical activity a person is performing (Human Activity Recognition), identifying who is performing that motion (user identification), and assessing how the movement is being performed (motor anomaly detection). Most previous research has addressed human motion modeling using invasive sensors in contact with the user or intrusive sensors that modify the user’s behavior while performing an action (cameras or microphones). In contrast, wearable devices such as smartphones and smartwatches can collect motion signals from users during their daily lives in a less invasive or intrusive way. Recently, there has been an exponential increase in research focused on inertial-signal processing to ...
Gil-Martín, Manuel — Universidad Politécnica de Madrid
Large-Scale Light Field Capture and Reconstruction
This thesis discusses approaches and techniques for converting Sparsely-Sampled Light Fields (SSLFs) into Densely-Sampled Light Fields (DSLFs), which can be used for visualization on 3DTV and Virtual Reality (VR) devices. As an example, a movable 1D large-scale light field acquisition system for capturing SSLFs in real-world environments is evaluated. This system consists of 24 sparsely placed RGB cameras and two Kinect V2 sensors. The real-world SSLF data captured with this setup can be leveraged to reconstruct real-world DSLFs. To this end, three challenging problems need to be solved for this system: (i) how to estimate the rigid transformation from the coordinate system of a Kinect V2 to the coordinate system of an RGB camera; (ii) how to register the two Kinect V2 sensors with a large displacement; (iii) how to reconstruct a DSLF from an SSLF with moderate and large disparity ranges. ...
Gao, Yuan — Department of Computer Science, Kiel University
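Problem (i) above, estimating the rigid transformation between the coordinate systems of a Kinect V2 and an RGB camera, is classically solved with the SVD-based Kabsch/Procrustes method once corresponding 3D points are available. The sketch below is a generic Python/NumPy illustration of that method (assumed here, not the thesis’s specific algorithm), with synthetic correspondences.

```python
import numpy as np

# Illustrative only: estimate a rigid transform (R, t) between two 3D point
# sets given in different coordinate systems, using the SVD-based
# Kabsch/Procrustes solution. Point correspondences are assumed known.
def rigid_transform(P, Q):
    """Find R, t minimizing the squared error of R @ P + t - Q for 3xN sets."""
    p_mean = P.mean(axis=1, keepdims=True)
    q_mean = Q.mean(axis=1, keepdims=True)
    H = (P - p_mean) @ (Q - q_mean).T
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = q_mean - R @ p_mean
    return R, t

# Synthetic check with a made-up rotation and translation.
rng = np.random.default_rng(2)
P = rng.standard_normal((3, 50))
a = 0.3
R_true = np.array([[np.cos(a), -np.sin(a), 0],
                   [np.sin(a),  np.cos(a), 0],
                   [0, 0, 1]])
t_true = np.array([[0.1], [0.2], [0.3]])
Q = R_true @ P + t_true
R_est, t_est = rigid_transform(P, Q)
print(np.allclose(R_est, R_true), np.allclose(t_est, t_true))
```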
Biosignal processing and activity modeling for multimodal human activity recognition
This dissertation's primary goal was to systematically study human activity recognition (HAR) and enhance its performance by advancing the sequential modeling of human activities with HMM-based machine learning. Driven by these purposes, this dissertation makes the following major contributions: the proposal of a HAR research pipeline that guides the building of a robust wearable end-to-end HAR system, and the implementation of the recording and recognition software Activity Signal Kit (ASK) according to that pipeline; the collection of several datasets of multimodal biosignals from over 25 subjects using the self-implemented ASK software, together with an easy mechanism to segment and annotate the data; comprehensive research on the offline HAR system based on the recorded datasets and the implementation of an end-to-end real-time HAR system; a novel activity modeling method for HAR, which partitions the human activity into a sequence of shared, meaningful, and activity ...
Liu, Hui — University of Bremen
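As a toy stand-in for the HMM-based sequential modeling mentioned above (the states, observation symbols, and all probabilities below are invented), the following Python/NumPy sketch runs Viterbi decoding to recover the most likely activity sequence from a stream of discrete feature symbols.

```python
import numpy as np

# Illustrative only: Viterbi decoding of the most likely state (activity)
# sequence in a small discrete HMM.
states = ["walk", "sit", "stand"]
log_pi = np.log([0.4, 0.3, 0.3])                    # initial state probabilities
log_A = np.log([[0.8, 0.1, 0.1],                    # transition matrix
                [0.1, 0.8, 0.1],
                [0.2, 0.2, 0.6]])
log_B = np.log([[0.7, 0.2, 0.1],                    # emission probs per state
                [0.1, 0.8, 0.1],
                [0.2, 0.3, 0.5]])
obs = [0, 0, 1, 1, 2, 0]                            # observed feature symbols

T, N = len(obs), len(states)
delta = np.full((T, N), -np.inf)                    # best log-probability so far
psi = np.zeros((T, N), dtype=int)                   # backpointers
delta[0] = log_pi + log_B[:, obs[0]]
for t in range(1, T):
    for j in range(N):
        scores = delta[t - 1] + log_A[:, j]
        psi[t, j] = np.argmax(scores)
        delta[t, j] = scores[psi[t, j]] + log_B[j, obs[t]]

path = [int(np.argmax(delta[-1]))]                  # backtrack the best path
for t in range(T - 1, 0, -1):
    path.append(psi[t, path[-1]])
print([states[s] for s in reversed(path)])
```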
Camera based motion estimation and recognition for human-computer interaction
Communicating with mobile devices has become an unavoidable part of our daily life. Unfortunately, current user interface designs are mostly taken directly from desktop computers. This has resulted in devices that are sometimes hard to use. Since more processing power and new sensing technologies are already available, it is possible to develop systems that communicate through different modalities. This thesis proposes novel computer vision approaches, including head tracking, object motion analysis and device ego-motion estimation, to allow efficient interaction with mobile devices. For head tracking, two new methods have been developed. The first method detects a face region and facial features by employing skin detection, morphology, and a geometrical face model. The second method, designed especially for mobile use, detects the face and eyes using local texture features. In both cases, Kalman filtering is applied to estimate ...
Hannuksela, Jari — University of Oulu
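The Kalman filtering step mentioned at the end of the abstract can be sketched generically: a minimal constant-velocity filter for one image coordinate of a tracked landmark. The frame rate, noise levels, and measurements below are made up, and this is not the thesis’s exact formulation.

```python
import numpy as np

# Illustrative only: constant-velocity Kalman filter smoothing one image
# coordinate of a tracked facial landmark.
dt = 1.0 / 30.0                                  # assumed video frame rate
F = np.array([[1, dt], [0, 1]])                  # state: [position, velocity]
H = np.array([[1, 0]])                           # only position is measured
Q = 1e-2 * np.eye(2)                             # process noise
R = np.array([[4.0]])                            # measurement noise [pixels^2]

x = np.array([[0.0], [0.0]])                     # initial state
P = np.eye(2)                                    # initial covariance

def kalman_step(x, P, z):
    # Predict with the motion model, then correct with the measurement z.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

for z in [10.0, 10.8, 12.1, 12.9, 14.2]:         # noisy landmark x-coordinates
    x, P = kalman_step(x, P, np.array([[z]]))
    print(float(x[0, 0]), float(x[1, 0]))        # filtered position and velocity
```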
Adaptive Nonlocal Signal Restoration and Enhancement Techniques for High-Dimensional Data
The large number of practical applications involving digital images has motivated significant interest in restoration solutions that improve the visual quality of the data in the presence of various acquisition and compression artifacts. Digital images are the result of an acquisition process based on the measurement of a physical quantity of interest incident upon an imaging sensor over a specified period of time. The quantity of interest depends on the targeted imaging application. Common imaging sensors measure the number of photons impinging on a dense grid of photodetectors in order to produce an image similar to what is perceived by the human visual system. Other applications focus on parts of the electromagnetic spectrum not visible to the human visual system, and thus require different sensing technologies to form the image. In all cases, even with the advance of ...
Maggioni, Matteo — Tampere University of Technology
Distributed Video Coding for Wireless Lightweight Multimedia Applications
In the modern wireless age, lightweight multimedia technology stimulates attractive commercial applications on a grand scale as well as highly specialized niche markets. In this regard, the design of efficient video compression systems meeting such key requirements as very low encoding complexity, transmission error robustness and scalability is no straightforward task. The answer can be found in fundamental information-theoretic results, according to which efficient compression can be achieved by leveraging knowledge of the source statistics at the decoder only, giving rise to distributed, or Wyner-Ziv, video coding. This dissertation engineers efficient lightweight Wyner-Ziv video coding schemes, emphasizing several design aspects and applications. The first contribution of this dissertation focuses on the design of effective side information generation techniques so as to boost the compression capabilities of Wyner-Ziv video coding systems. To this end, overlapped block motion estimation ...
Deligiannis, Nikos — Vrije Universiteit Brussel
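Side information generation in Wyner-Ziv coding builds on motion estimation between already decoded frames. As a generic, simplified illustration (plain, non-overlapped block matching on synthetic frames; not the overlapped scheme of the thesis), the Python/NumPy sketch below finds the displacement minimizing the sum of absolute differences (SAD).

```python
import numpy as np

# Illustrative only: exhaustive block matching with a SAD criterion, the
# elementary operation behind motion-estimation-based side information.
def best_match(ref, target_block, top, left, search=4):
    """Find the displacement of target_block (taken at (top, left)) in ref."""
    B = target_block.shape[0]
    best, best_sad = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + B > ref.shape[0] or x + B > ref.shape[1]:
                continue
            sad = np.abs(ref[y:y + B, x:x + B].astype(int)
                         - target_block.astype(int)).sum()
            if sad < best_sad:
                best_sad, best = sad, (dy, dx)
    return best, best_sad

rng = np.random.default_rng(3)
frame0 = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
frame1 = np.roll(frame0, shift=(2, -1), axis=(0, 1))        # simulate global motion
block = frame1[16:24, 16:24]
motion, sad = best_match(frame0, block, 16, 16)
print("estimated motion vector:", motion, "SAD:", sad)
```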
This study proposes a new spectral representation called the Zeros of Z-Transform (ZZT), which is an all-zero representation of the z-transform of the signal. In addition, new chirp group delay processing techniques are developed for the analysis of resonances of a signal. The combination of the ZZT representation with the chirp group delay processing algorithms provides a useful domain in which to study the resonance characteristics of the source and filter components of speech. Using the two representations, effective algorithms are developed for: source-tract decomposition of speech, glottal flow parameter estimation, formant tracking and feature extraction for speech recognition. The ZZT representation is mainly important for theoretical studies. Studying the ZZT of a signal is essential for developing effective chirp group delay processing methods. Therefore, the ZZT representation of the source-filter model of speech is first studied to provide a theoretical background. ...
Bozkurt, Baris — Université de Mons
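The ZZT itself is easy to state: for a finite-length frame x(0), ..., x(N-1), it is the set of roots of the polynomial whose coefficients are the samples, since X(z) = sum_n x(n) z^{-n} = z^{-(N-1)} (x(0) z^{N-1} + ... + x(N-1)). A minimal sketch follows, assuming Python/NumPy and a toy windowed sinusoid; this is not code from the thesis.

```python
import numpy as np

# Illustrative only: the Zeros of Z-Transform (ZZT) of a finite-length signal
# are the roots of the polynomial whose coefficients are the samples,
# x(0) z^{N-1} + x(1) z^{N-2} + ... + x(N-1).
x = np.hanning(64) * np.sin(2 * np.pi * 0.1 * np.arange(64))   # toy windowed frame
zzt = np.roots(x[np.argmax(x != 0):])   # drop leading zeros before root finding

# ZZT-based analysis typically inspects the zeros in polar form.
print(np.abs(zzt)[:5])      # radii of the first few zeros
print(np.angle(zzt)[:5])    # their angles (related to frequencies)
```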
Video person recognition strategies using head motion and facial appearance
In this doctoral dissertation, we principally explore the use of the temporal information available in video sequences for person and gender recognition; in particular, we focus on the analysis of head and facial motion and their potential application as biometric identifiers. We also investigate how to exploit as much video information as possible for automatic recognition; more precisely, we examine the possibility of integrating head and mouth motion information with facial appearance into a multimodal biometric system, and we study the extraction of novel spatio-temporal facial features for recognition. We initially present a person recognition system that exploits unconstrained head motion information, extracted by tracking a few facial landmarks in the image plane. In particular, we detail how each video sequence is first pre-processed by semi-automatically detecting the face, and then automatically tracking the facial landmarks over ...
Matta, Federico — Eurécom / Multimedia communications
Planar 3D Scene Representations for Depth Compression
The recent invasion of stereoscopic 3D television technologies is expected to be followed by autostereoscopic and holographic technologies. The glasses-free, multi-view stereoscopic display capabilities of these technologies will advance the 3D experience. The prospective 3D format for creating the multiple views required by such displays is the Multiview Video plus Depth (MVD) format, based on Depth Image Based Rendering (DIBR) techniques. The depth modality of the MVD format is an active research area whose main objective is to develop DIBR-friendly, efficient compression methods. As part of this research, the thesis proposes novel 3D planar-based depth representations. The planar approximation of the stereo depth images is formulated as an energy-based co-segmentation problem using a Markov Random Field model. The energy terms of this problem are designed to mimic the rate-distortion tradeoff for a depth compression application. A heuristic algorithm is developed ...
Özkalaycı, Burak Oğuz — Middle East Technical University
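The planar approximation underlying the proposed depth representations can be illustrated at its most basic level: fitting a plane z = ax + by + c to a depth patch by least squares. The synthetic patch and noise level below are made up; the thesis’s MRF-based co-segmentation is considerably more involved.

```python
import numpy as np

# Illustrative only: least-squares fit of a plane z = a*x + b*y + c to a
# patch of a depth map, the elementary step behind planar depth models.
rng = np.random.default_rng(4)
h, w = 16, 16
ys, xs = np.mgrid[0:h, 0:w]
depth = 0.5 * xs + 0.2 * ys + 10.0 + 0.05 * rng.standard_normal((h, w))

A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
coeffs, *_ = np.linalg.lstsq(A, depth.ravel(), rcond=None)
a, b, c = coeffs
residual = depth - (a * xs + b * ys + c)
print(f"plane: z = {a:.3f}x + {b:.3f}y + {c:.3f}, RMS error {residual.std():.3f}")
```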