Computational Attention: Modelisation and Application to Audio and Image Processing

Consciously or unconsciously, humans always pay attention to a wide variety of stimuli. Attention is part of daily life, and it is the first step to understanding. This thesis deals with a computational approach to the human attentional mechanism and with its possible applications, mainly in the field of computer vision. In the first stage, the text introduces a rarity-based three-level attention model handling monodimensional signals as well as images and video sequences. The concept of attention is defined as the transformation of a huge acquired unstructured data set into a smaller structured one while preserving the information: the attentional mechanism turns raw data into intelligence. Afterwards, several applications are described in the fields of machine vision, signal coding and enhancement, medical imaging, event detection and so on. These applications not only show the applicability of the proposed computational ...
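The rarity idea above can be made concrete with a minimal sketch: score each pixel by the self-information of its quantized gray level, so that rare features attract attention. This is only an illustration of the rarity principle, not the three-level model developed in the thesis; the bin count and the synthetic test image are arbitrary choices.

```python
import numpy as np

def rarity_saliency(image, bins=16):
    """Toy rarity-based saliency map: rare gray levels get high attention.

    Illustrative sketch of the 'rarity' idea (self-information of a feature),
    not the three-level attention model proposed in the thesis.
    """
    img = np.asarray(image, dtype=np.float64)
    # Quantize gray levels into a small number of bins.
    quantized = np.clip((img / 256.0 * bins).astype(int), 0, bins - 1)

    # Probability of each bin over the whole image.
    counts = np.bincount(quantized.ravel(), minlength=bins)
    probs = counts / counts.sum()

    # Self-information -log(p): rarer features are more salient.
    info = -np.log2(probs + 1e-12)
    return info[quantized]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame = rng.integers(100, 140, size=(64, 64))   # mostly uniform background
    frame[30:34, 30:34] = 255                        # a rare bright patch
    sal = rarity_saliency(frame)
    print(sal[32, 32] > sal[0, 0])  # True: the rare patch attracts attention
```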

Mancas, Matei — Université de Mons


Vision models and quality metrics for image processing applications

Optimizing the performance of digital imaging systems with respect to the capture, display, storage and transmission of visual information represents one of the biggest challenges in the field of image and video processing. Taking into account the way humans perceive visual information can be greatly beneficial for this task. To achieve this, it is necessary to understand and model the human visual system, which is also the principal goal of this thesis. Computational models for different aspects of the visual system are developed, which can be used in a wide variety of image and video processing applications. The proposed models and metrics are shown to be consistent with human perception. The focus of this work is visual quality assessment. A perceptual distortion metric (PDM) for the evaluation of video quality is presented. It is based on a model of the ...
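For contrast with the perceptual approach, the simplest full-reference comparison is a purely pixel-based one such as PSNR. The sketch below computes it for a synthetic frame pair; it is only a baseline point of comparison and not the perceptual distortion metric described in the thesis.

```python
import numpy as np

def psnr(reference, distorted, peak=255.0):
    """Peak signal-to-noise ratio between a reference and a distorted frame.

    A purely pixel-based baseline; the PDM of the thesis instead models the
    human visual system, so treat this only as the simplest full-reference metric.
    """
    ref = np.asarray(reference, dtype=np.float64)
    dist = np.asarray(distorted, dtype=np.float64)
    mse = np.mean((ref - dist) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    frame = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
    noisy = np.clip(frame + rng.normal(0, 5, size=frame.shape), 0, 255)
    print(f"PSNR of the noisy frame: {psnr(frame, noisy):.1f} dB")
```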

Winkler, Stefan — Swiss Federal Institute of Technology


Modeling Perceived Quality for Imaging Applications

People of all generations are making more and more use of digital imaging systems in their daily lives. The image content rendered by these digital imaging systems largely differs in perceived quality depending on the system and its applications. To be able to optimize the experience of viewers of this content, understanding and modeling perceived image quality is essential. Research on modeling image quality in a full-reference framework --- where the original content can be used as a reference --- is well established in the literature. In many current applications, however, the perceived image quality needs to be modeled in a no-reference framework in real time. As a consequence, the model needs to quantitatively predict the perceived quality of a degraded image without being able to compare it to its original version, and has to achieve this with limited computational complexity in order ...
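A minimal illustration of the no-reference setting, under the assumption that blur is the distortion of interest: the variance of the Laplacian is a classic low-complexity sharpness cue that needs no original image. It is not the specific no-reference metric developed in this thesis.

```python
import numpy as np
from scipy.ndimage import laplace, gaussian_filter

def laplacian_sharpness(image):
    """No-reference sharpness score: variance of the Laplacian response.

    A classic low-complexity blur cue, used here only to illustrate the
    no-reference setting; not the metric proposed in the thesis.
    """
    return float(np.var(laplace(np.asarray(image, dtype=np.float64))))

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    sharp = rng.random((128, 128))          # stand-in for a detailed image
    blurred = gaussian_filter(sharp, sigma=2.0)
    # The degraded version scores lower without any reference image.
    print(laplacian_sharpness(sharp) > laplacian_sharpness(blurred))  # True
```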

Liu, Hantao — Delft University of Technology


Sound Event Detection by Exploring Audio Sequence Modelling

Everyday sounds in real-world environments are a powerful source of information by which humans can interact with their environments. Humans can infer what is happening around them by listening to everyday sounds. At the same time, it is a challenging task for a computer algorithm in a smart device to automatically recognise, understand, and interpret everyday sounds. Sound event detection (SED) is the process of transcribing an audio recording into sound event tags with onset and offset time values. This involves classification and segmentation of sound events in the given audio recording. SED has numerous applications in everyday life which include security and surveillance, automation, healthcare monitoring, multimedia information retrieval, and assisted living technologies. SED is to everyday sounds what automatic speech recognition (ASR) is to speech and automatic music transcription (AMT) is to music. The fundamental questions in designing ...
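The transcription step can be illustrated with a small post-processing sketch: given per-frame probabilities for one sound class (invented numbers standing in for a classifier's output), threshold them and convert contiguous active runs into (onset, offset) pairs in seconds. Real SED systems smooth the probabilities and handle many classes; this only shows the segmentation idea.

```python
import numpy as np

def probabilities_to_events(probs, frame_hop_s, threshold=0.5):
    """Turn per-frame class probabilities into (onset, offset) events in seconds.

    Minimal post-processing sketch; in practice the probabilities come from a
    trained classifier and are smoothed before thresholding.
    """
    active = np.asarray(probs) >= threshold
    events = []
    start = None
    for i, flag in enumerate(active):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            events.append((start * frame_hop_s, i * frame_hop_s))
            start = None
    if start is not None:
        events.append((start * frame_hop_s, len(active) * frame_hop_s))
    return events

if __name__ == "__main__":
    # Hypothetical per-frame probabilities for one sound class ("dog bark").
    probs = [0.1, 0.2, 0.8, 0.9, 0.7, 0.2, 0.1, 0.6, 0.9, 0.3]
    print(probabilities_to_events(probs, frame_hop_s=0.02))
    # Two events: roughly (0.04, 0.10) s and (0.14, 0.18) s.
```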

Pankajakshan, Arjun — Queen Mary University of London


Audio-visual processing and content management techniques, for the study of (human) bioacoustics phenomena

The present doctoral thesis aims towards the development of new long-term, multi-channel, audio-visual processing techniques for the analysis of bioacoustics phenomena. The effort is focused on the study of the physiology of the gastrointestinal system, aiming at the support of medical research for the discovery of gastrointestinal motility patterns and the diagnosis of functional disorders. The term "processing" in this case is quite broad, incorporating the procedures of signal processing, content description, manipulation and analysis, that are applied to all the recorded bioacoustics signals, the auxiliary audio-visual surveillance information (for the monitoring of experiments and the subjects' status), and the extracted audio-video sequences describing the abdominal sound-field alterations. The thesis outline is as follows. The main objective of the thesis, which is the technological support of medical research, is presented in the first chapter. A quick problem definition is initially ...

Dimoulas, Charalampos — Department of Electrical and Computer Engineering, Faculty of Engineering, Aristotle University of Thessaloniki, Thessaloniki, Greece


Deep Learning Techniques for Visual Counting

The explosion of Deep Learning (DL) added a boost to the already rapidly developing field of Computer Vision, to such a point that vision-based tasks are now part of our everyday lives. Applications such as image classification, photo stylization, or face recognition are nowadays pervasive, as evidenced by the advent of modern systems trivially integrated into mobile applications. In this thesis, we investigated and enhanced the visual counting task, which automatically estimates the number of objects in still images or video frames. Recently, due to the growing interest in it, several Convolutional Neural Network (CNN)-based solutions have been suggested by the scientific community. These artificial neural networks, inspired by the organization of the animal visual cortex, provide a way to automatically learn effective representations from raw visual data and can be successfully employed to address typical challenges characterizing this task, ...
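The density-map formulation commonly used for CNN-based counting can be sketched as follows: a small fully convolutional network (here untrained and far shallower than real counting models) outputs a non-negative density map whose integral is the estimated count. PyTorch is assumed, and the architecture is illustrative only, not a model from the thesis.

```python
import torch
import torch.nn as nn

# Tiny fully convolutional density estimator: the predicted map is non-negative
# and the object count is obtained by summing it. Real counting networks are
# much deeper; this sketch only shows the counting-by-density idea.
class TinyCounter(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=1), nn.ReLU(),   # non-negative density
        )

    def forward(self, x):
        return self.net(x)

if __name__ == "__main__":
    model = TinyCounter()
    frame = torch.rand(1, 3, 128, 128)           # an untrained forward pass
    density = model(frame)
    estimated_count = density.sum().item()        # count = integral of the density
    print(f"Estimated count (untrained): {estimated_count:.1f}")
```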

Ciampi, Luca — University of Pisa


Facial Feature Extraction and Estimation of Gaze Direction in Human-Computer Interaction

In the modern age of information, there is a growing interest in improving interaction between humans and computers in an unremitting attempt to render it as seamless as the interaction between humans. At the core of this endeavor are the study of the human face and of the focus of attention, determined by the eye gaze. The main objective of the current thesis is to develop accurate and reliable methods for extracting facial information, localizing the positions of the eye centers and tracking the eye gaze. Usually, such systems are grounded on various assumptions regarding the topology of the features and the camera parameters, or require dedicated hardware. In the spirit of ubiquitous computing, all the methods developed in the scope of the current thesis use images and videos acquired using standard cameras under natural illumination, without the requirement ...
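As a toy baseline for eye-centre localization (not the method developed in the thesis), the sketch below assumes a tight grayscale crop around one eye from a standard camera and returns the intensity-weighted centroid of its darkest pixels; the dark-pixel fraction is an arbitrary choice.

```python
import numpy as np

def eye_center(eye_patch, dark_fraction=0.05):
    """Estimate the eye centre of a grayscale eye patch as the intensity-weighted
    centroid of its darkest pixels (the pupil/iris region).

    Deliberately simple baseline; assumes the patch is a tight crop around one eye.
    """
    patch = np.asarray(eye_patch, dtype=np.float64)
    threshold = np.quantile(patch, dark_fraction)
    mask = patch <= threshold
    weights = (threshold - patch + 1.0) * mask      # darker pixels weigh more
    ys, xs = np.nonzero(mask)
    cy = np.sum(ys * weights[mask]) / np.sum(weights[mask])
    cx = np.sum(xs * weights[mask]) / np.sum(weights[mask])
    return cy, cx

if __name__ == "__main__":
    patch = np.full((40, 60), 200.0)       # bright sclera/skin
    patch[14:26, 33:45] = 30.0             # dark iris/pupil blob
    print(eye_center(patch))               # (19.5, 38.5) for this synthetic patch
```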

Skodras, Evangelos — University of Patras


Tracking and Planning for Surveillance Applications

Vision and infrared sensors are very common in surveillance and security applications, and there are numerous examples where a critical infrastructure, e.g. a harbor, an airport, or a military camp, is monitored by video surveillance systems. There is a need for automatic processing of sensor data and intelligent control of the sensor in order to obtain efficient and high-performance solutions that can support a human operator. This thesis considers two subparts of the complex sensor fusion system, namely target tracking and sensor control. The multiple target tracking problem using particle filtering is studied. In particular, applications where road-constrained targets are tracked with an airborne video or infrared camera are considered. By utilizing the information in the road network map it is possible to enhance the target tracking and prediction performance. A dynamic model suitable for on-road target tracking with ...
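The target-tracking half can be illustrated with a bootstrap particle filter on a one-dimensional along-road state (position and speed), which is the simplest way to encode the road-network constraint. The noise levels and trajectory below are invented, and this is not the specific filter design studied in the thesis.

```python
import numpy as np

rng = np.random.default_rng(3)

# Bootstrap particle filter for a target moving along a road, using the distance
# travelled along the road as the 1-D position state. Toy illustration only.
N = 500                                   # number of particles
dt, q, r = 1.0, 0.5, 5.0                  # time step, process and measurement noise
T = 20

true_pos, true_vel = 0.0, 10.0            # ground-truth target on the road
particles = np.column_stack([rng.normal(0, 10, N), rng.normal(10, 2, N)])

for t in range(T):
    # Simulate the target and a noisy along-road position measurement.
    true_pos += true_vel * dt
    z = true_pos + rng.normal(0, r)

    # Predict: constant-velocity motion along the road with process noise.
    particles[:, 0] += particles[:, 1] * dt + rng.normal(0, q, N)
    particles[:, 1] += rng.normal(0, q, N)

    # Update: weight particles by the measurement likelihood, then resample.
    w = np.exp(-0.5 * ((z - particles[:, 0]) / r) ** 2)
    w /= w.sum()
    particles = particles[rng.choice(N, size=N, p=w)]

    estimate = particles[:, 0].mean()
    print(f"t={t:2d}  true={true_pos:7.1f}  estimate={estimate:7.1f}")
```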

Skoglar, Per — Linköping University, Department of Electrical Engineering


Automated Face Recognition from Low-resolution Imagery

Recently, significant advances in the field of automated face recognition have been achieved using computer vision, machine learning, and deep learning methodologies. However, despite claims of super-human performance of face recognition algorithms on select key benchmark tasks, there remain several open problems that preclude the general replacement of human face recognition work with automated systems. State-of-the-art automated face recognition systems based on deep learning methods are able to achieve high accuracy when the face images from which they must recognize subjects are of sufficiently high quality. However, low image resolution remains one of the principal obstacles to face recognition systems, and their performance in the low-resolution regime is decidedly below human capabilities. In this PhD thesis, we present a systematic study of modern automated face recognition systems in the presence of image degradation in various forms. Based on our ...

Grm, Klemen — University of Ljubljana


A statistical approach to motion estimation

Digital video technology has been characterized by steady growth in the last decade. New applications like video e-mail, third-generation mobile phone video communications, videoconferencing, and video streaming on the web continuously push for further evolution of research in digital video coding. In order to be sent over the internet or even wireless networks, video information clearly needs compression to meet bandwidth requirements. Compression is mainly realized by exploiting the redundancy present in the data. A sequence of images contains an intrinsic, intuitive and simple idea of redundancy: two successive images are very similar. This simple concept is called temporal redundancy. The search for a proper scheme to exploit temporal redundancy completely changes the scenario between compression of still pictures and of image sequences. It also represents the key to very high performance in image sequence coding when compared ...
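Temporal redundancy is classically exploited by block-matching motion estimation: each block of the current frame is predicted from the best-matching block of the previous frame under a sum-of-absolute-differences (SAD) criterion. The exhaustive-search sketch below illustrates the textbook scheme, not the statistical approach proposed in the thesis.

```python
import numpy as np

def block_matching(prev_frame, curr_frame, block=8, search=4):
    """Exhaustive block-matching motion estimation with a SAD criterion.

    Each block of the current frame is matched against displaced blocks of the
    previous frame within a small search window.
    """
    h, w = curr_frame.shape
    motion = np.zeros((h // block, w // block, 2), dtype=int)
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            target = curr_frame[by:by + block, bx:bx + block]
            best, best_sad = (0, 0), np.inf
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + block > h or x + block > w:
                        continue
                    sad = np.abs(prev_frame[y:y + block, x:x + block] - target).sum()
                    if sad < best_sad:
                        best, best_sad = (dy, dx), sad
            motion[by // block, bx // block] = best
    return motion

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    prev = rng.random((32, 32))
    curr = np.roll(prev, shift=(2, 1), axis=(0, 1))   # global shift of (2, 1)
    vectors = block_matching(prev, curr)
    print(vectors[1, 1])   # interior blocks point back to (-2, -1) in the previous frame
```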

Moschetti, Fulvio — Swiss Federal Institute of Technology


Vision-based human activities recognition in supervised or assisted environment

Human Activity Recognition (HAR) has been a hot research topic in the last decade due to its wide range of applications. Indeed, it has been the basis for the implementation of many computer vision applications, home security, video surveillance, and human-computer interaction. By HAR we mean tools and systems that detect and recognize actions performed by individuals. With the considerable progress made in sensing technologies, HAR systems shifted from wearable and ambient-based to vision-based. This motivated researchers to propose a large number of vision-based solutions. From another perspective, HAR plays an important role in the health care sector and is involved in the construction of fall detection systems and many smart-home-related systems. Fall detection (FD) consists in identifying the occurrence of falls among other daily life activities. This is essential because falling is one of ...

Beddiar, Djamila Romaissa — Université Larbi Ben M’hidi Oum El Bouaghi, Algeria


An Attention Model and its Application in Man-Made Scene Interpretation

The ultimate aim of research into computer vision is designing a system which interprets its surrounding environment in a way similar to how humans do effortlessly. However, the state of technology is far from achieving such a goal. In this thesis, different components of a computer vision system designed for the task of interpreting man-made scenes, in particular images of buildings, are described. The flow of information in the proposed system is bottom-up, i.e., the image is first segmented into its meaningful components and subsequently the regions are labelled using a contextual classifier. Starting from simple observations concerning the human vision system and the Gestalt laws of human perception, like the law of 'good (simple) shape' and 'perceptual grouping', a blob detector is developed that identifies components in a 2D image. These components are convex regions of interest, ...
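To make the notion of a blob detector concrete, the sketch below uses a generic baseline (thresholding plus connected-component labelling with a minimum-area filter) to produce candidate regions. It is not the Gestalt-motivated detector developed in the thesis, and the threshold and area values are arbitrary.

```python
import numpy as np
from scipy import ndimage

def bright_blobs(image, threshold=0.6, min_area=20):
    """Detect bright compact regions by thresholding and connected-component
    labelling, discarding tiny components.

    Generic baseline only, to show what a blob detector outputs (candidate regions).
    """
    mask = np.asarray(image, dtype=np.float64) >= threshold
    labels, n = ndimage.label(mask)
    blobs = []
    for region in ndimage.find_objects(labels):
        if np.count_nonzero(labels[region]) >= min_area:
            blobs.append(region)        # a pair of slices bounding the region
    return blobs

if __name__ == "__main__":
    img = np.zeros((100, 100))
    img[20:35, 30:45] = 1.0             # one building-like bright block
    img[60:62, 70:72] = 1.0             # too small: rejected as noise
    print(bright_blobs(img))            # one bounding box, rows 20-34, cols 30-44
```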

Jahangiri, Mohammad — Imperial College London


Video Processing for Remote Respiration Monitoring

Monitoring of vital signs is a key tool in medical diagnostics to assess the onset and the evolution of several diseases. Among fundamental vital parameters, such as the heart rate, blood pressure and body temperature, the Respiratory Rate (RR) plays an important role. For this reason, respiration needs to be carefully monitored in order to detect potential signs or events indicating possible changes of health conditions. Monitoring of the respiration is generally carried out in hospital and clinical environments by the use of expensive devices with several sensors connected to the patient's body. A new research trend, in order to reduce healthcare service costs and make monitoring of vital signs more comfortable, is the development of low-cost systems which may allow remote and contactless monitoring; in such a context, an appealing method is to rely on video processing-based solutions. In ...
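A minimal video-based sketch of RR estimation, assuming a static camera and a fixed chest/abdomen region of interest (ROI): average the ROI intensity per frame and take the dominant spectral peak inside a plausible respiration band. The band limits and the synthetic 0.3 Hz test signal are illustrative choices, not the method of the thesis.

```python
import numpy as np

def respiratory_rate(roi_means, fps, band=(0.1, 0.7)):
    """Estimate breaths per minute from the mean intensity of a chest/abdomen ROI
    over time, by locating the dominant spectral peak in a plausible respiration
    band (here 6-42 breaths/min).

    Minimal contactless-monitoring sketch assuming a static camera and ROI.
    """
    x = np.asarray(roi_means, dtype=np.float64)
    x = x - x.mean()                               # remove the DC component
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    peak = freqs[in_band][np.argmax(spectrum[in_band])]
    return 60.0 * peak

if __name__ == "__main__":
    fps, duration = 25.0, 60.0
    t = np.arange(0, duration, 1.0 / fps)
    # Synthetic ROI signal: chest motion at 0.3 Hz (18 breaths/min) plus noise.
    rng = np.random.default_rng(6)
    signal = 0.5 * np.sin(2 * np.pi * 0.3 * t) + 0.05 * rng.normal(size=t.size)
    print(f"Estimated rate: {respiratory_rate(signal, fps):.1f} breaths/min")
```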

Alinovi, Davide — University of Parma


Fire Detection Algorithms Using Multimodal Signal and Image Analysis

Dynamic textures are common in natural scenes. Examples of dynamic textures in video include fire, smoke, clouds, volatile organic compound (VOC) plumes in infra-red (IR) videos, trees in the wind, sea and ocean waves, etc. Researchers have extensively studied 2-D textures and related problems in the fields of image processing and computer vision. On the other hand, there is very little research on dynamic texture detection in video. In this dissertation, signal and image processing methods developed for detection of a specific set of dynamic textures are presented. Signal and image processing methods are developed for the detection of flames and smoke in open and large spaces, with a range of up to 30 m to the camera, in visible-range and IR video. Smoke is semi-transparent at the early stages of fire. Edges present in image frames with smoke start losing their sharpness ...
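The loss of edge sharpness can be turned into a simple cue: track the high-frequency (Laplacian) energy of the frames and flag a sustained drop. The sketch below simulates spreading smoke as a progressively stronger blur; it shows one isolated clue only, whereas the thesis combines several spatio-temporal clues before raising an alarm. The drop ratio is an arbitrary choice.

```python
import numpy as np
from scipy.ndimage import laplace, gaussian_filter

def edge_energy(frame):
    """High-frequency (edge) energy of a frame, via the Laplacian response."""
    return float(np.mean(laplace(np.asarray(frame, dtype=np.float64)) ** 2))

def smoke_suspected(frames, drop_ratio=0.6):
    """Flag a sequence whose edge energy decays markedly, as happens when
    semi-transparent smoke softens the edges of the background.

    One isolated cue only; a real detector fuses several clues before alarming.
    """
    energies = np.array([edge_energy(f) for f in frames])
    return bool(energies[-1] < drop_ratio * energies[0])

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    background = rng.random((64, 64))
    # Simulate smoke spreading as a progressively stronger blur of the scene.
    frames = [gaussian_filter(background, sigma=s) for s in np.linspace(0.2, 2.0, 10)]
    print(smoke_suspected(frames))   # True: edges lose their sharpness over time
```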

Toreyin, Behcet Ugur — Bilkent University


Tradeoffs and limitations in statistically based image reconstruction problems

Advanced nuclear medical imaging systems collect multiple attributes of a large number of photon events, resulting in extremely large datasets which present challenges to image reconstruction and assessment. This dissertation addresses several of these challenges. The image formation process in nuclear medical imaging can be posed as a parametric estimation problem where the image pixels are the parameters of interest. Since nuclear medical imaging applications are often ill-posed inverse problems, unbiased estimators result in very noisy, high-variance images. Typically, smoothness constraints and a priori information are used to reduce variance in medical imaging applications at the cost of biasing the estimator. For such problems, there exists an inherent tradeoff between the recovered spatial resolution of an estimator, overall bias, and its statistical variance; lower variance can only be bought at the price of decreased spatial resolution and/or increased overall bias. ...
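The bias/variance tradeoff can be reproduced on a toy problem: recovering a 1-D signal from blurred, noisy data with ridge-regularized least squares. As the regularization strength grows, the Monte Carlo variance of the estimate falls while its bias rises. This is only a schematic analogue of the nuclear-imaging estimators analyzed in the dissertation; the operator, noise level and regularization values are invented.

```python
import numpy as np

rng = np.random.default_rng(8)

# Toy ill-posed problem: recover a 1-D "image" x from blurred, noisy data y = A x + n.
# Stronger regularisation lowers the variance of the estimate but increases its bias.
n = 60
x_true = np.zeros(n)
x_true[25:35] = 1.0                                        # a small bright region
A = np.array([[np.exp(-0.5 * ((i - j) / 2.0) ** 2) for j in range(n)] for i in range(n)])
A /= A.sum(axis=1, keepdims=True)                          # row-normalised blur operator

for lam in (1e-4, 1e-2, 1.0):
    estimates = []
    for _ in range(200):                                   # Monte Carlo noise realisations
        y = A @ x_true + rng.normal(0, 0.01, n)
        # Ridge-regularised least squares: (A^T A + lam I)^-1 A^T y
        x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)
        estimates.append(x_hat)
    estimates = np.array(estimates)
    bias = np.linalg.norm(estimates.mean(axis=0) - x_true)
    variance = estimates.var(axis=0).sum()
    print(f"lambda={lam:g}:  bias={bias:.3f}  variance={variance:.3f}")
```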

Kragh, Tom — University of Michigan
