Video Based Detection of Driver Fatigue (2009)
Non-rigid Registration-based Data-driven 3D Facial Action Unit Detection
Automated analysis of facial expressions has been an active area of study due to its potential applications not only for intelligent human-computer interfaces but also for research on human facial behavior. To advance automatic expression analysis, this thesis proposes and empirically validates two hypotheses: (i) 3D face data is a better data modality than conventional 2D camera images, not only because it is far less disturbed by illumination and head-pose effects but also because it captures true facial surface information. (ii) It is possible to perform detailed face registration without resorting to any face modeling. This means that data-driven methods in automatic expression analysis can compensate for confounding effects such as pose and physiognomy differences, and can process facial features more effectively, without suffering the drawbacks of model-driven analysis. Our study is based upon the Facial Action Coding System (FACS), as this paradigm ...
Savran, Arman — Bogazici University
Video person recognition strategies using head motion and facial appearance
In this doctoral dissertation, we principally explore the use of the temporal information available in video sequences for person and gender recognition; in particular, we focus on the analysis of head and facial motion and their potential application as biometric identifiers. We also investigate how to exploit as much video information as possible for automatic recognition; more precisely, we examine the possibility of integrating head and mouth motion information with facial appearance into a multimodal biometric system, and we study the extraction of novel spatio-temporal facial features for recognition. We initially present a person recognition system that exploits unconstrained head motion information, extracted by tracking a few facial landmarks in the image plane. In particular, we detail how each video sequence is first pre-processed by semi-automatically detecting the face, and then automatically tracking the facial landmarks over ...
Matta, Federico — Eurécom / Multimedia communications
Automatic Analysis of Head and Facial Gestures in Video Streams
Automatic analysis of head gestures and facial expressions is a challenging research area with significant applications for intelligent human-computer interfaces. An important task is the automatic classification of non-verbal messages composed of facial signals, where both facial expressions and head rotations are observed. This is challenging because there is no definite grammar or code-book for mapping non-verbal facial signals onto a corresponding mental state. Furthermore, non-verbal facial signals and the observed emotions depend on personality, society, mood and also the context in which they are displayed or observed. This thesis addresses three tasks required for an effective vision-based automatic face and head gesture (FHG) analyzer. First, we develop a fully automatic, robust and accurate 17-point facial landmark localizer based on local appearance information and structural information of ...
Cinar Akakin, Hatice — Bogazici University
Contributions to Human Motion Modeling and Recognition using Non-intrusive Wearable Sensors
This thesis contributes to motion characterization through inertial and physiological signals captured by wearable devices and analyzed using signal processing and deep learning techniques. This research leverages the possibilities of motion analysis for three main applications: recognizing what physical activity a person is performing (Human Activity Recognition), identifying who is performing that motion (user identification), and determining how the movement is being performed (motor anomaly detection). Most previous research has addressed human motion modeling using invasive sensors in contact with the user or intrusive sensors that modify the user’s behavior while performing an action (cameras or microphones). In this sense, wearable devices such as smartphones and smartwatches can collect motion signals from users during their daily lives in a less invasive and less intrusive way. Recently, there has been an exponential increase in research focused on inertial-signal processing to ...
Gil-Martín, Manuel — Universidad Politécnica de Madrid
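Human Activity Recognition pipelines like the one described above typically begin by slicing the continuous inertial stream into fixed-size, possibly overlapping windows before any feature extraction or deep learning. A minimal sketch of that windowing step in Python, using placeholder accelerometer data and simple statistical features (mean, standard deviation) as stand-ins for the thesis's actual deep models:

```python
import math

def sliding_windows(signal, width, step):
    """Split a 1-D inertial signal into fixed-size, possibly overlapping windows."""
    return [signal[i:i + width]
            for i in range(0, len(signal) - width + 1, step)]

def features(window):
    """Per-window statistics commonly used as baseline HAR features."""
    n = len(window)
    mean = sum(window) / n
    var = sum((x - mean) ** 2 for x in window) / n
    return (mean, math.sqrt(var))

# Placeholder data: a still phase followed by a vigorous phase.
signal = [0.0] * 50 + [(-1.0) ** i * 2.0 for i in range(50)]
feats = [features(w) for w in sliding_windows(signal, width=25, step=25)]
```

With non-overlapping 25-sample windows, the still phase yields near-zero variance while the vigorous phase yields high variance, which is exactly the kind of separability a downstream classifier exploits.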
This thesis focuses on wearables for health status monitoring, covering applications aimed at emergency responses to the COVID-19 pandemic and at the aging society. Methods of ambient assisted living (AAL) are presented for the neurodegenerative Parkinson’s disease (PD), facilitating ’aging in place’ thanks to machine learning built around wearables, i.e. mHealth solutions. Furthermore, approaches using machine learning and wearables are discussed for early-stage COVID-19 detection, with encouraging accuracy. Firstly, a publicly available dataset containing COVID-19, influenza, and healthy control data was reused for research purposes. The solution presented in this thesis treats the task as a classification problem and outperformed the state-of-the-art methods, whereas the original paper introduced only anomaly detection and did not report the specificity of the created models. The proposed model for early detection of COVID-19 achieved 78 % with the k-NN classifier. Moreover, a ...
Justyna Skibińska — Brno University of Technology & Tampere University
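The abstract above emphasizes that, unlike the original anomaly-detection paper, the proposed classifier also reports specificity. As an illustration of those two ingredients, here is a minimal pure-Python k-NN classifier with a specificity computation; the feature vectors, labels and values below are hypothetical placeholders, not the thesis's data or model:

```python
from collections import Counter

def knn_predict(train, labels, x, k=3):
    """Classify x by majority vote among its k nearest training points
    (squared Euclidean distance)."""
    order = sorted(range(len(train)),
                   key=lambda i: sum((a - b) ** 2 for a, b in zip(train[i], x)))
    votes = Counter(labels[i] for i in order[:k])
    return votes.most_common(1)[0][0]

def specificity(y_true, y_pred, negative="healthy"):
    """True-negative rate: fraction of negative cases correctly rejected."""
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == negative and p == negative)
    n = sum(1 for t in y_true if t == negative)
    return tn / n

# Hypothetical 2-D wearable features (e.g. normalized physiological deltas).
train = [(0.1, 0.2), (0.2, 0.1), (0.9, 0.8), (0.8, 0.9)]
labels = ["healthy", "healthy", "covid", "covid"]
test = [(0.15, 0.15), (0.85, 0.85), (0.2, 0.3)]
truth = ["healthy", "covid", "healthy"]
pred = [knn_predict(train, labels, x, k=3) for x in test]
```

Reporting specificity alongside accuracy matters here because a screening tool that flags too many healthy users as positive would be unusable in practice.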
Emotion assessment for affective computing based on brain and peripheral signals
Current Human-Machine Interfaces (HMI) lack “emotional intelligence”, i.e. they are not able to identify human emotional states and take this information into account when deciding on the proper actions to execute. The goal of affective computing is to fill this gap by detecting emotional cues occurring during Human-Computer Interaction (HCI) and synthesizing emotional responses. In the last decades, most studies on emotion assessment have focused on the analysis of facial expressions and speech to determine the emotional state of a person. Physiological activity also carries emotional information that can be used for emotion assessment but has received less attention despite its advantages (for instance, it is less easily faked than facial expressions). This thesis reports on the use of two types of physiological activities to assess emotions in the context of affective computing: the activity ...
Chanel, Guillaume — University of Geneva
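Emotion assessment from peripheral physiological signals usually reduces a raw trace to a handful of features before classification. As a hedged illustration (not the thesis's actual feature set), here is a tiny Python sketch extracting two classic electrodermal-activity features, the tonic level and the number of phasic onsets, from hypothetical traces:

```python
def eda_features(signal, rise_threshold=0.05):
    """Minimal feature set for an electrodermal activity (EDA) trace:
    tonic level (mean) and number of phasic onsets (sharp sample-to-sample rises)."""
    mean_level = sum(signal) / len(signal)
    onsets = sum(1 for a, b in zip(signal, signal[1:]) if b - a > rise_threshold)
    return mean_level, onsets

# Hypothetical skin-conductance traces (arbitrary units), not real recordings.
calm = [0.30, 0.30, 0.31, 0.30, 0.30, 0.31]
aroused = [0.30, 0.40, 0.52, 0.50, 0.62, 0.60]
```

A higher count of phasic responses is a commonly used marker of arousal, which is one reason such signals are harder to fake than a facial expression.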
Detection of epileptic seizures based on video and accelerometer recordings
Epilepsy is one of the most common neurological diseases, especially in children. Although the majority of patients (70%-75%) can be treated through medication or surgery, a significant group cannot. For this latter group it is advisable to follow the evolution of the disease. This can be done through long-term automatic monitoring, which gives an objective measure of the number of seizures the patient has, for example during the night. Overnight, however, social control is reduced, and parents or caregivers can miss some seizures. For severe seizures it is sometimes necessary to intervene, to avoid dangerous situations during or after the seizure (e.g. the danger of suffocation caused by vomiting or by a position that obstructs breathing, or the risk of injury during violent movements), and to comfort ...
Cuppens, Kris — Katholieke Universiteit Leuven
Visual Analysis of Faces with Application in Biometrics, Forensics and Health Informatics
Computer vision-based analysis of human facial video provides information regarding expression, disease symptoms, and physiological parameters such as heart rate, blood pressure and respiratory rate. It also provides a convenient source of heartbeat signals for use in biometrics and forensics. This thesis is a collection of works spanning five themes in the realm of computer vision-based facial image analysis: Monitoring elderly patients at private homes, Face quality assessment, Measurement of physiological parameters, Contact-free heartbeat biometrics, and Decision support system for healthcare. The work on monitoring elderly patients at private homes includes a detailed survey and review of the monitoring technologies relevant to older patients living at home, discussing previous reviews and relevant taxonomies, different scenarios for home monitoring solutions for older patients, sensing and data acquisition techniques, data processing and analysis techniques, available datasets for ...
Haque, Mohammad Ahsanul — Aalborg University
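Contact-free heartbeat measurement from facial video (remote photoplethysmography) typically averages a color channel over a skin region and then finds the dominant frequency in a plausible heart-rate band. A minimal sketch of that frequency search, using a synthetic mean-green-channel trace; real pipelines add face tracking, detrending and band-pass filtering, which are omitted here:

```python
import math

def dominant_frequency(signal, fps):
    """Return the strongest frequency (Hz) in a mean-channel trace,
    searched over a plausible heart-rate band (0.7-4.0 Hz) via a direct DFT."""
    n = len(signal)
    mean = sum(signal) / n
    centered = [x - mean for x in signal]
    best_f, best_power = 0.0, -1.0
    for k in range(1, n // 2):
        f = k * fps / n
        if not 0.7 <= f <= 4.0:
            continue
        re = sum(x * math.cos(2 * math.pi * k * i / n) for i, x in enumerate(centered))
        im = sum(x * math.sin(2 * math.pi * k * i / n) for i, x in enumerate(centered))
        power = re * re + im * im
        if power > best_power:
            best_f, best_power = f, power
    return best_f

fps = 30
# 10 s of a synthetic pulse at 1.2 Hz (72 bpm) riding on a constant skin tone.
trace = [100 + 0.5 * math.sin(2 * math.pi * 1.2 * t / fps) for t in range(300)]
bpm = dominant_frequency(trace, fps) * 60
```

Restricting the search to 0.7-4.0 Hz (42-240 bpm) suppresses slow illumination drift and high-frequency sensor noise that would otherwise dominate the spectrum.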
Monitoring Infants by Automatic Video Processing
This work has as its objective the development of non-invasive, low-cost systems for monitoring and automatically diagnosing specific neonatal diseases by means of the analysis of suitable video signals. We focus on monitoring infants potentially at risk of diseases characterized by the presence or absence of rhythmic movements of one or more body parts. Seizures and respiratory diseases are specifically considered, but the approach is general. Seizures are defined as sudden neurological and behavioural alterations. They are age-dependent phenomena and the most common sign of central nervous system dysfunction. Neonatal seizures have their onset within the 28th day of life in newborns at term and within the 44th week of conceptional age in preterm infants. Their main causes are hypoxic-ischaemic encephalopathy, intracranial haemorrhage, and sepsis. Studies indicate an incidence rate of neonatal seizures of 2‰ of live births, and 11‰ for preterm ...
Cattani, Luca — University of Parma (Italy)
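Detecting the presence or absence of rhythmic movement in a video-derived motion trace can be done by checking for a strong peak in the normalized autocorrelation over a plausible lag range. A hedged sketch of that idea, on hypothetical limb-motion traces (not real recordings, and not the thesis's actual detector):

```python
import math
import random

def is_rhythmic(signal, min_lag=5, max_lag=40, threshold=0.8):
    """Flag periodic motion via the strongest normalized autocorrelation
    peak within a plausible lag range."""
    n = len(signal)
    mean = sum(signal) / n
    c = [x - mean for x in signal]
    var = sum(x * x for x in c)
    if var == 0:
        return False
    best = max(
        sum(c[i] * c[i + lag] for i in range(n - lag)) / var
        for lag in range(min_lag, max_lag + 1)
    )
    return best > threshold

# Hypothetical motion traces: a periodic jerk vs. aperiodic fidgeting.
clonic = [math.sin(2 * math.pi * i / 10) for i in range(100)]
rng = random.Random(0)
fidgeting = [rng.uniform(-1, 1) for _ in range(100)]
```

The lag range and threshold would in practice be tuned to the frame rate and to the movement frequencies of clinical interest (e.g. clonic jerks vs. normal breathing).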
The analysis of audiovisual data aims at extracting high-level information, equivalent to that which a human can extract. It is considered a fundamental problem, unsolved in its general form. Even though the inverse problem, audiovisual (sound and animation) synthesis, is judged easier than the former, it also remains unsolved. Systematic research on these problems yields solutions that constitute the basis for a great number of continuously developing applications. In this thesis, we examine the two aforementioned fundamental problems. We propose algorithms and models for the analysis and synthesis of articulated motion and undulatory (snake) locomotion, using data from video sequences. The goal of this research is multilevel information extraction from video, such as object tracking and activity recognition, and the synthesis of 3-D animation in virtual environments based on the results of the analysis. An ...
Panagiotakis, Costas — University of Crete
Acoustic Event Detection: Feature, Evaluation and Dataset Design
It takes more time to think of a silent scene, action or event than to find one that emanates sound. Almost everything that happens, not only speaking or playing music, is accompanied by or results in one or more sounds mixed together. This makes acoustic event detection (AED) one of the most researched topics in audio signal processing today, and interest will probably not decline in the near future. This is due to the thirst for understanding and digitally abstracting more and more events in life via the enormous amount of audio recorded through thousands of applications in our daily routine. But it is also a result of two intrinsic properties of audio: it does not require a direct line of sight to be perceived, and it is less intrusive to record than image or video. Many applications such ...
Mina Mounir — KU Leuven, ESAT STADIUS
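A classic baseline for acoustic event detection is to frame the signal, compute short-time energy, and mark frames that rise well above an adaptive noise floor; contiguous marked frames then form one event. A minimal sketch of that baseline with synthetic audio (this is a generic energy detector for illustration, not the feature design proposed in the thesis):

```python
import math

def detect_events(samples, frame_len=160, energy_ratio=4.0):
    """Mark frames whose short-time energy exceeds a multiple of the median
    frame energy; contiguous marked frames form one acoustic event.
    Returns [start_frame, end_frame] pairs."""
    frames = [samples[i:i + frame_len]
              for i in range(0, len(samples) - frame_len + 1, frame_len)]
    energies = [sum(x * x for x in f) / frame_len for f in frames]
    floor = sorted(energies)[len(energies) // 2]  # median as noise floor
    active = [e > energy_ratio * floor for e in energies]
    events, inside = [], False
    for i, a in enumerate(active):
        if a and not inside:
            events.append([i, i])
            inside = True
        elif a:
            events[-1][1] = i
        else:
            inside = False
    return events

# 1 s of faint synthetic background with a loud tone burst in the middle.
audio = [0.01 * math.sin(0.1 * i) for i in range(16000)]
for i in range(6400, 9600):
    audio[i] += 0.5 * math.sin(0.3 * i)
events = detect_events(audio)
```

Using the median rather than the mean as the noise floor keeps the threshold robust when loud events occupy a sizeable fraction of the recording.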
Structured and Sequential Representations For Human Action Recognition
The human action recognition problem is one of the most challenging in the computer vision domain, and it plays an emerging role in various fields of study. In this thesis, we investigate structured and sequential representations of spatio-temporal data for recognizing human actions and for measuring the quality of action performance. In video sequences, we characterize each action with a graphical structure of its spatio-temporal interest points, and each such interest point is described by its cuboid descriptors. In the case of depth data, an action is represented by the sequence of skeleton joints. Given such descriptors, we solve the human action recognition problem through a hyper-graph matching formulation. As is known, the hyper-graph matching problem is NP-complete. We simplify the problem in two stages to enable a fast solution: in the first stage, we take into consideration physical constraints such as time ...
Celiktutan, Oya — Bogazici University
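For the sequential (skeleton) representation mentioned above, a standard baseline for comparing two actions of different speeds is dynamic time warping, which preserves temporal order while allowing local stretching. This is a generic baseline sketch, not the thesis's hyper-graph matching formulation, and the joint-angle sequences below are hypothetical:

```python
def dtw_distance(seq_a, seq_b):
    """Dynamic time warping distance between two 1-D feature sequences
    (e.g. one joint angle over time), preserving temporal order while
    tolerating speed variations."""
    inf = float("inf")
    n, m = len(seq_a), len(seq_b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(seq_a[i - 1] - seq_b[j - 1])
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

# Hypothetical elbow-angle traces: the same wave gesture at two speeds,
# and a different (punch-like) gesture.
wave_slow = [0.0, 0.2, 0.4, 0.4, 0.2, 0.0]
wave_fast = [0.0, 0.4, 0.0]
punch = [0.0, 0.9, 0.9, 0.0]
```

A nearest-neighbor classifier over such distances already separates gestures by shape rather than duration, which is why order-preserving alignment is a common reference point for sequential action models.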
Understanding and Assessing Quality of Experience in Immersive Communications
eXtended Reality (XR) technology, also called Mixed Reality (MR), is in constant development and improvement in terms of hardware and software to offer relevant experiences to users. One advance in XR has been the introduction of real visual information into the virtual environment, offering a more natural interaction with the scene and a greater acceptance of the technology. Another advance has been achieved with the representation of the scene through a video that covers the entire environment, called 360-degree or omnidirectional video. These videos are acquired by cameras with omnidirectional lenses that cover the full 360 degrees of the scene and are generally viewed by users through a head-tracked Head Mounted Display (HMD). Users visualize only a subset of the 360-degree scene, called the viewport, which changes with variations in the viewing direction of the users, determined by the movements of ...
Orduna, Marta — Universidad Politécnica de Madrid
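The viewport described above can be approximated by mapping the head orientation (yaw, pitch) to a pixel region of the equirectangular frame. A hedged sketch of that mapping; note that a true viewport is not a rectangle in equirectangular coordinates, so this axis-aligned box is only a rough approximation, and the field-of-view defaults are illustrative:

```python
def viewport_region(yaw_deg, pitch_deg, width, height, fov_h=90.0, fov_v=90.0):
    """Approximate the viewport as an axis-aligned pixel box inside an
    equirectangular frame. Yaw in [-180, 180), pitch in [-90, 90];
    yaw/pitch (0, 0) maps to the frame center."""
    cx = ((yaw_deg + 180.0) % 360.0) / 360.0 * width
    cy = (90.0 - pitch_deg) / 180.0 * height
    w = fov_h / 360.0 * width
    h = fov_v / 180.0 * height
    left = (cx - w / 2) % width      # may wrap around the frame edge
    top = max(0.0, cy - h / 2)
    return left, top, w, h

# Looking straight ahead in a 4K equirectangular frame.
left, top, w, h = viewport_region(0.0, 0.0, 3840, 2160)
```

Mappings like this are what viewport-dependent streaming and QoE analyses use to decide which part of the 360-degree frame the user actually sees at each instant.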
Epilepsy is one of the most common neurological diseases; it manifests in repetitive epileptic seizures resulting from abnormal, synchronous activity of a large group of neurons. Depending on the affected brain regions, seizures produce various severe clinical symptoms. There is no cure for epilepsy, and sometimes even medication and other therapies, like surgery, vagus nerve stimulation or a ketogenic diet, do not control the number of seizures. In that case, long-term (home) monitoring and automatic seizure detection would enable tracking of the evolution of the disease and improve objective insight into responses to medical interventions or changes in medical treatment. Especially during the night, supervision is reduced; hence a large number of seizures is missed. In addition, an alarm should be integrated into the automated seizure detection algorithm for severe seizures in order to help the ...
Milošević, Milica — KU Leuven
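The alarm component mentioned above has to distinguish sustained seizure-like movement from brief normal movements. A minimal, hypothetical sketch of one way to do this on an accelerometer-magnitude stream: raise the alarm only when windowed movement energy stays above a threshold for several consecutive windows (the thresholds and data here are illustrative, not the thesis's detector):

```python
def seizure_alarm(magnitudes, window=10, energy_thresh=2.0, sustain=3):
    """Raise an alarm when windowed movement energy stays above a threshold
    for `sustain` consecutive windows, suppressing brief normal movements."""
    run = 0
    for start in range(0, len(magnitudes) - window + 1, window):
        w = magnitudes[start:start + window]
        energy = sum(x * x for x in w) / window
        run = run + 1 if energy > energy_thresh else 0
        if run >= sustain:
            return True
    return False

# Hypothetical accelerometer magnitudes: a quiet night vs. sustained jerks.
quiet_night = [0.1] * 100
violent = [0.1] * 40 + [3.0, -3.0] * 30
```

Requiring the threshold to be exceeded for several consecutive windows is a simple way to trade a slightly longer detection latency for far fewer false alarms, which matters when caregivers are woken at night.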
Camera based motion estimation and recognition for human-computer interaction
Communicating with mobile devices has become an unavoidable part of our daily life. Unfortunately, current user interface designs are mostly taken directly from desktop computers, which has resulted in devices that are sometimes hard to use. Since more processing power and new sensing technologies are now available, it is possible to develop systems that communicate through different modalities. This thesis proposes novel computer vision approaches, including head tracking, object motion analysis and device ego-motion estimation, to allow efficient interaction with mobile devices. For head tracking, two new methods have been developed. The first method detects the face region and facial features by employing skin detection, morphology, and a geometrical face model. The second method, designed especially for mobile use, detects the face and eyes using local texture features. In both cases, Kalman filtering is applied to estimate ...
Hannuksela, Jari — University of Oulu
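The Kalman-filtering step mentioned in the abstract can be illustrated with the textbook constant-velocity formulation on a single landmark coordinate. This is a generic sketch under assumed process/measurement noise values (`q`, `r`), not the thesis's exact parametrization:

```python
def kalman_1d(measurements, dt=1.0, q=1e-3, r=0.25):
    """Constant-velocity Kalman filter for one landmark coordinate:
    state [position, velocity], scalar position measurements.
    Returns the filtered position estimates."""
    x = [measurements[0], 0.0]        # initial state estimate
    p = [[1.0, 0.0], [0.0, 1.0]]      # state covariance
    out = []
    for z in measurements:
        # Predict: x = F x, P = F P F^T + Q, with F = [[1, dt], [0, 1]], Q = q*I.
        x = [x[0] + dt * x[1], x[1]]
        p = [[p[0][0] + dt * (p[1][0] + p[0][1]) + dt * dt * p[1][1] + q,
              p[0][1] + dt * p[1][1]],
             [p[1][0] + dt * p[1][1],
              p[1][1] + q]]
        # Update with measurement z (H = [1, 0]): K = P H^T / (H P H^T + r).
        s = p[0][0] + r
        k = [p[0][0] / s, p[1][0] / s]
        x = [x[0] + k[0] * (z - x[0]), x[1] + k[1] * (z - x[0])]
        p = [[(1 - k[0]) * p[0][0], (1 - k[0]) * p[0][1]],
             [p[1][0] - k[1] * p[0][0], p[1][1] - k[1] * p[0][1]]]
        out.append(x[0])
    return out

still = kalman_1d([5.0] * 20)                       # stationary landmark
track = kalman_1d([float(i) for i in range(20)])    # landmark moving at 1 px/frame
```

Because the state carries velocity, the filter can also predict the landmark position when a frame's detection fails, which is exactly why trackers pair a detector with a Kalman filter.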