Cognitive-driven speech enhancement using EEG-based auditory attention decoding for hearing aid applications

Identifying the target speaker in hearing aid applications is essential for improving speech intelligibility. Although several speech enhancement algorithms are available to reduce background noise or to perform source separation in multi-speaker scenarios, their performance depends on correctly identifying the target speaker to be enhanced. Recent advances in electroencephalography (EEG) have shown that it is possible to identify the target speaker to whom the listener is attending using single-trial EEG-based auditory attention decoding (AAD) methods. However, in realistic acoustic environments AAD performance is degraded by undesired disturbances such as interfering speakers, noise and reverberation. In addition, for real-world hearing aid applications it is important to close the AAD loop by presenting online auditory feedback. This thesis deals with the problem of identifying and enhancing the target speaker in realistic acoustic environments based on decoding the auditory attention ...
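The stimulus-reconstruction approach behind single-trial AAD can be sketched on synthetic data. This is a generic textbook illustration, not the algorithms from the thesis: all signals, dimensions and noise levels below are invented, and a real evaluation would use held-out trials rather than reusing the training data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for real data: two speech envelopes and multichannel EEG
# whose channels (noisily) track the attended speaker's envelope.
n_samples, n_channels = 2000, 8
env_a = np.abs(rng.standard_normal(n_samples))    # attended speaker's envelope
env_b = np.abs(rng.standard_normal(n_samples))    # ignored speaker's envelope
mixing = rng.standard_normal(n_channels)
eeg = np.outer(env_a, mixing) + 0.5 * rng.standard_normal((n_samples, n_channels))

# Train a backward (stimulus-reconstruction) decoder by least squares:
# a linear map from the EEG channels to the attended envelope.
decoder, *_ = np.linalg.lstsq(eeg, env_a, rcond=None)

# Decode attention: reconstruct an envelope from EEG and correlate it with each
# candidate speaker; the higher correlation identifies the attended speaker.
reconstruction = eeg @ decoder
corr_a = np.corrcoef(reconstruction, env_a)[0, 1]
corr_b = np.corrcoef(reconstruction, env_b)[0, 1]
attended = "A" if corr_a > corr_b else "B"
```

In practice the decoder would also include a range of time lags to model the neural response latency; the single-lag version above only shows the correlate-and-compare decision rule.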

Aroudi, Ali — University of Oldenburg, Germany


Miniaturization effects and node placement for neural decoding in EEG sensor networks

Electroencephalography (EEG) is a non-invasive neurorecording technique with the potential to be used for 24/7 neuromonitoring in daily life, e.g., in the context of neural prostheses, brain-computer interfaces, or improved diagnosis of brain disorders. Although existing mobile wireless EEG headsets are a useful tool for short-term experiments, they are still too heavy, bulky and obtrusive for long-term EEG monitoring in daily life. However, we are now witnessing a wave of new miniature EEG sensor devices with small embedded electrodes, which we refer to as mini-EEGs. A mini-EEG ideally consists of a wireless node with a small scalp-area footprint, in which the electrodes, amplifier and wireless radio are embedded. However, due to their miniaturization, mini-EEGs have the drawback that only a few EEG channels can be recorded within a small area. The latter also implies that the ...

Mundanad Narayanan, Abhijith — KU Leuven


Acoustic sensor network geometry calibration and applications

In the modern world, we are increasingly surrounded by computing devices with communication links and one or more microphones, for example smartphones, tablets, laptops and hearing aids. These devices can work together as nodes in an acoustic sensor network (ASN). Such networks are a growing platform that opens up many practical applications. ASN-based speech enhancement, source localization, and event detection can be applied to teleconferencing, camera control, automation, and assisted living. For these kinds of applications, awareness of the auditory objects and their spatial positions are key properties. In order to provide these two kinds of information, novel methods have been developed in this thesis. Information on the type of auditory objects is provided by a novel real-time sound classification method. Information on the position of human speakers is provided by a novel localization ...
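The thesis's localization method itself is not reproduced here. As a hedged illustration of how a pair of ASN microphones can contribute to speaker localization, the following sketch estimates a time difference of arrival with the classic GCC-PHAT method on synthetic signals; the circular shift stands in for a real propagation delay, and all parameters are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy two-microphone scenario: the second microphone receives the same source
# shifted by a known number of samples.
n = 16000
true_delay = 12                                  # delay in samples
mic1 = rng.standard_normal(n)
mic2 = np.roll(mic1, true_delay)                 # circular shift models the delay

# GCC-PHAT: whiten the cross-spectrum so only phase (i.e., delay) information
# remains, which sharpens the correlation peak under reverberation.
X1, X2 = np.fft.rfft(mic1), np.fft.rfft(mic2)
cross = np.conj(X1) * X2
cc = np.fft.irfft(cross / (np.abs(cross) + 1e-12), n=n)
est_delay = int(np.argmax(cc))                   # circular lag; here 0 <= lag < n
```

Given the microphone geometry, such pairwise delay estimates can then be intersected to obtain a source position, which is one common building block of ASN localization.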

Plinge, Axel — TU Dortmund University


Blind Signal Separation

The separation of independent sources from mixed observed data is a fundamental and challenging signal processing problem. In many practical situations, one or more desired signals need to be recovered from the mixtures only. A typical example is speech recordings made in an acoustic environment in the presence of background noise and/or competing speakers. Other examples include EEG signals, passive sonar applications and cross-talk in data communications. The audio signal separation problem is sometimes referred to as The Cocktail Party Problem. When several people in the same room are conversing at the same time, it is remarkable that a person is able to choose to concentrate on one of the speakers and listen to his or her speech flow unimpeded. This ability, usually referred to as the binaural cocktail party effect, results in part from binaural (two-eared) hearing. In contrast, ...
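One standard way to attack the instantaneous version of this problem is independent component analysis. The following sketch is a generic symmetric FastICA implementation on synthetic mixtures, not taken from the thesis; the sources, mixing matrix and iteration count are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two independent non-Gaussian sources and a random-looking mixing matrix.
n = 5000
S = rng.uniform(-1, 1, size=(2, n))        # sources (e.g., two speakers)
A = np.array([[1.0, 0.6], [0.4, 1.0]])     # unknown mixing (idealized, instantaneous)
X = A @ S                                  # observed mixtures (two microphones)

# Whiten the mixtures: zero mean, identity covariance.
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(X @ X.T / n)
Xw = (E / np.sqrt(d)) @ E.T @ X

# Symmetric FastICA with the tanh nonlinearity.
W = rng.standard_normal((2, 2))
for _ in range(100):
    WX = W @ Xw
    G, Gp = np.tanh(WX), 1.0 - np.tanh(WX) ** 2
    W = (G @ Xw.T) / n - Gp.mean(axis=1, keepdims=True) * W
    u, s, vt = np.linalg.svd(W)            # symmetric decorrelation:
    W = u @ vt                             # W <- (W W^T)^{-1/2} W

Y = W @ Xw                                 # estimated sources (up to order/sign/scale)
```

Real cocktail-party recordings are convolutive rather than instantaneous mixtures, so practical audio separation works on this principle in the frequency domain or with multichannel filters.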

Chan, Dominic C. B. — University of Cambridge


Improving Auditory Steady-State Response Detection Using Multichannel EEG Signal Processing

The ability to hear and process sounds is crucial. For adults, the inevitable ongoing aging process reduces the quality of the speech and sounds one perceives. If this effect is allowed to evolve too far, social isolation may occur. For infants, a disability in processing sounds results in an inappropriate development of speech, language, and cognitive abilities. To reduce the handicap of hearing loss in children, it is important to detect the hearing loss early and to provide effective rehabilitation. As a result, the hearing of all newborns needs to be screened. If the outcome of the screening does not indicate normal hearing, a more detailed hearing assessment is required. However, standard behavioral testing is not possible at this age, so the assessment has to rely on objective physiological techniques that are not influenced by sleep or sedation. Over the last few decades, the use of ...

Van Dun, Bram — KU Leuven


New approaches for EEG signal processing: Artifact EOG removal by ICA-RLS scheme and Tracks extraction method

Localizing the bioelectric phenomena originating from the cerebral cortex and evoked by auditory and somatosensory stimuli is a clear objective, both for understanding how the brain works and for recognizing different pathologies. Diseases such as Parkinson's, Alzheimer's, schizophrenia and epilepsy are intensively studied to find a cure or an accurate diagnosis. Epilepsy is considered the disease with the highest prevalence among disorders of neurological origin. The recurrent and sudden incidence of seizures can lead to dangerous and possibly life-threatening situations. Since disturbance of consciousness and sudden loss of motor control often occur without any warning, the ability to predict epileptic seizures would reduce patients' anxiety, thus considerably improving quality of life and safety. The common procedure for epileptic seizure detection is based on monitoring brain activity via electroencephalogram (EEG) data. This process is very time-consuming, especially in the case of long ...
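The thesis's combined ICA-RLS scheme is not reproduced here. As a minimal, hedged illustration of the RLS half, the following sketch removes a synthetic EOG-like artifact from a contaminated channel with an RLS adaptive noise canceller driven by an EOG reference; all signals, the 3-tap propagation path and the filter parameters are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy single-channel example: clean EEG plus an ocular (EOG) artifact that
# leaks into the EEG electrode through an unknown short propagation path.
n, taps = 4000, 3
eeg = 0.5 * rng.standard_normal(n)
eog = np.convolve(rng.standard_normal(n), np.ones(20) / 20, mode="same")  # slow artifact
path = np.array([0.8, 0.3, 0.1])
contaminated = eeg + np.convolve(eog, path, mode="full")[:n]

# RLS adaptive noise canceller: predict the artifact from the EOG reference
# channel and keep the prediction error as the cleaned EEG sample.
lam, delta = 0.999, 0.01
w = np.zeros(taps)
P = np.eye(taps) / delta                 # inverse input-correlation estimate
cleaned = np.zeros(n)
for t in range(n):
    u = np.array([eog[t - k] if t - k >= 0 else 0.0 for k in range(taps)])
    g = P @ u / (lam + u @ P @ u)        # gain vector
    e = contaminated[t] - w @ u          # a-priori error = cleaned sample
    w = w + g * e
    P = (P - np.outer(g, u @ P)) / lam
    cleaned[t] = e
```

In the thesis's setting, ICA would first isolate an EOG-dominated component to serve as the reference; here the reference is simply given.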

Guerrero-Mosquera, Carlos — University Carlos III of Madrid


Design and Evaluation of Feedback Control Algorithms for Implantable Hearing Devices

Using a hearing device is one of the most successful approaches to partially restore the degraded functionality of an impaired auditory system. However, due to the complex structure of the human auditory system, hearing impairment can manifest itself in different ways and, therefore, its compensation can be achieved through different classes of hearing devices. Although the majority of hearing devices are conventional hearing aids (HAs), several other classes of hearing devices have been developed. For instance, bone-conduction devices (BCDs) and cochlear implants (CIs) have successfully been used for more than thirty years. More recently, other classes of implantable devices have been developed, such as middle ear implants (MEIs), implantable BCDs, and direct acoustic cochlear implants (DACIs). Most of these classes of hearing devices rely on a sound processor running algorithms that compensate for the hearing impairment. ...

Bernardi, Giuliano — KU Leuven


Automated detection of epileptic seizures in pediatric patients based on accelerometry and surface electromyography

Epilepsy is one of the most common neurological diseases; it manifests in repetitive epileptic seizures resulting from abnormal, synchronous activity of a large group of neurons. Depending on the affected brain regions, seizures produce various severe clinical symptoms. There is no cure for epilepsy, and sometimes even medication and other therapies, like surgery, vagus nerve stimulation or a ketogenic diet, do not control the number of seizures. In that case, long-term (home) monitoring and automatic seizure detection would enable tracking the evolution of the disease and improve objective insight into responses to medical interventions or changes in medical treatment. Especially during the night, supervision is reduced; hence a large number of seizures are missed. In addition, an alarm should be integrated into the automated seizure detection algorithm for severe seizures in order to help the ...

Milošević, Milica — KU Leuven


Mixed structural models for 3D audio in virtual environments

In the world of information and communications technology (ICT), strategies for innovation and development are increasingly focusing on applications that require spatial representation and real-time interaction with and within 3D-media environments. One of the major challenges that such applications have to address is user-centricity, reflected, for example, in the development of complexity-hiding services that let people personalize their own delivery of services. In these terms, multimodal interfaces represent a key factor for enabling an inclusive use of new technologies by everyone. Achieving this requires multimodal realistic models that describe our environment, and in particular models that accurately describe the acoustics of the environment and communication through the auditory modality. Examples of currently active research directions and application areas include 3DTV and the future internet, 3D visual-sound scene coding, transmission and reconstruction, and teleconferencing systems, to name but ...

Geronazzo, Michele — University of Padova


Robust feedback cancellation algorithms for single- and multi-microphone hearing aids

When providing the necessary amplification in hearing aids, the risk of acoustic feedback is increased due to the coupling between the hearing aid loudspeaker and the hearing aid microphone(s). This acoustic feedback is often perceived as an annoying whistling or howling. Thus, to reduce the occurrence of acoustic feedback, robust and fast-acting feedback suppression algorithms are required. The main objective of this thesis is to develop and evaluate algorithms for robust and fast-acting feedback suppression in hearing aids. Specifically, we focus on enhancing the performance of adaptive filtering algorithms that estimate the feedback component in the hearing aid microphone by reducing the number of required adaptive filter coefficients and by improving the trade-off between fast convergence and good steady-state performance. Additionally, we develop fixed spatial filter design methods that can be applied in a multi-microphone earpiece.
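As a hedged illustration of the adaptive-filtering idea described above (a generic NLMS sketch on synthetic signals, not the thesis's algorithms), the following estimates a short feedback path from the known loudspeaker signal and subtracts the predicted feedback component from the microphone signal; the path, signal levels and step size are all invented.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy setup: the microphone picks up the incoming sound plus the loudspeaker
# signal filtered by an unknown 4-tap acoustic feedback path.
n, taps = 8000, 4
loudspeaker = rng.standard_normal(n)
incoming = 0.3 * rng.standard_normal(n)
feedback_path = np.array([0.5, -0.3, 0.2, -0.1])
mic = incoming + np.convolve(loudspeaker, feedback_path, mode="full")[:n]

# NLMS adaptive filter: estimate the feedback component from the (known)
# loudspeaker signal and subtract it from the microphone signal.
mu, eps = 0.1, 1e-8
w = np.zeros(taps)
err = np.zeros(n)
for t in range(n):
    u = np.array([loudspeaker[t - k] if t - k >= 0 else 0.0 for k in range(taps)])
    e = mic[t] - w @ u                   # feedback-compensated signal
    w = w + mu * e * u / (u @ u + eps)   # normalized LMS update
    err[t] = e
```

In a real hearing aid the loudspeaker signal is correlated with the incoming signal because of the closed loop, which biases this open-loop estimate; handling that bias is precisely what robust feedback cancellation is about.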

Schepker, Henning — University of Oldenburg


Contributions to Human Motion Modeling and Recognition using Non-intrusive Wearable Sensors

This thesis contributes to motion characterization through inertial and physiological signals captured by wearable devices and analyzed using signal processing and deep learning techniques. This research leverages motion analysis for three main applications: knowing what physical activity a person is performing (Human Activity Recognition), identifying who is performing that motion (user identification), and determining how the movement is being performed (motor anomaly detection). Most previous research has addressed human motion modeling using invasive sensors in contact with the user or intrusive sensors that modify the user's behavior while performing an action (cameras or microphones). By contrast, wearable devices such as smartphones and smartwatches can collect motion signals from users during their daily lives in a less invasive or intrusive way. Recently, there has been an exponential increase in research focused on inertial-signal processing to ...

Gil-Martín, Manuel — Universidad Politécnica de Madrid


Non-linear Spatial Filtering for Multi-channel Speech Enhancement

A large part of human speech communication takes place in noisy environments and is supported by technical devices. For example, a hearing-impaired person might use a hearing aid to take part in a conversation in a busy restaurant. These devices, as well as telecommunication systems in noisy environments and voice-controlled assistants, make use of speech enhancement and separation algorithms that improve the quality and intelligibility of speech by separating speakers and suppressing background noise as well as other unwanted effects such as reverberation. If a device is equipped with more than one microphone, which is very common nowadays, then multi-channel speech enhancement approaches can leverage spatial information in addition to single-channel tempo-spectral information to perform the task. Traditionally, linear spatial filters, so-called beamformers, have been employed to suppress signal components from directions other than the target direction and thereby enhance the desired ...
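The classic linear spatial filter mentioned above can be illustrated with a minimal delay-and-sum sketch on synthetic signals. This is a generic example with invented parameters, not the thesis's (non-linear) approach; integer-sample circular shifts stand in for the fractional-delay or frequency-domain steering used in practice.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy scenario: a 4-microphone array receives the target with known integer
# sample delays per microphone, plus spatially white sensor noise (0 dB SNR).
n_mics, n = 4, 4000
target = rng.standard_normal(n)
delays = np.array([0, 2, 4, 6])          # target's time of arrival per microphone
mics = np.stack([np.roll(target, d) for d in delays])
mics = mics + rng.standard_normal(mics.shape)

# Delay-and-sum beamformer: undo the steering delays, then average. Averaging
# M aligned channels keeps the target intact while dividing the power of the
# uncorrelated noise by M.
aligned = np.stack([np.roll(mics[m], -delays[m]) for m in range(n_mics)])
output = aligned.mean(axis=0)

snr_in = target.var() / 1.0              # per-channel input SNR (noise var = 1)
snr_out = target.var() / (output - target).var()
```

With four microphones the expected array gain against spatially white noise is a factor of four (about 6 dB), which is exactly the linear baseline that non-linear spatial filters aim to surpass in more adverse, directional noise fields.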

Tesch, Kristina — Universität Hamburg


Multimodal epileptic seizure detection: towards a wearable solution

Epilepsy is one of the most common neurological disorders, which affects almost 1% of the population worldwide. Anti-epileptic drugs provide adequate treatment for about 70% of epilepsy patients. The remaining 30% of the patients continue to have seizures, which drastically affects their quality of life. In order to obtain efficacy measures of therapeutic interventions for these patients, an objective way to count and document seizures is needed. However, in an outpatient setting, one of the major problems is that seizure diaries kept by patients are unreliable. Automated seizure detection systems could help to objectively quantify seizures. Those detection systems are typically based on full scalp Electroencephalography (EEG). In an outpatient setting, full scalp EEG is of limited use because patients will not tolerate wearing a full EEG cap for long time periods during daily life. There is a need for ...

Vandecasteele, Kaat — KU Leuven


Deep Learning Techniques for Visual Counting

The explosion of Deep Learning (DL) has given a boost to the already rapidly developing field of Computer Vision, to the point that vision-based tasks are now part of our everyday lives. Applications such as image classification, photo stylization, and face recognition are nowadays pervasive, as evidenced by the advent of modern systems trivially integrated into mobile applications. In this thesis, we investigated and advanced visual counting, the task of automatically estimating the number of objects in still images or video frames. Recently, due to growing interest in this task, several Convolutional Neural Network (CNN)-based solutions have been proposed by the scientific community. These artificial neural networks, inspired by the organization of the animal visual cortex, provide a way to automatically learn effective representations from raw visual data and can be successfully employed to address the typical challenges characterizing this task, ...

Ciampi, Luca — University of Pisa


Automated audio captioning with deep learning methods

In the audio research field, the majority of machine learning systems focus on recognizing a limited number of sound events. However, when a machine interacts with real data, it must be able to handle much more varied and complex situations. To tackle this problem, annotators use natural language, which allows any sound information to be summarized. Automated Audio Captioning (AAC) was introduced recently to develop systems capable of automatically producing a description of any type of sound in text form. This task concerns all kinds of sounds, such as environmental, urban and domestic sounds, sound effects, music and speech. Such systems could be used by people who are deaf or hard of hearing, and could improve the indexing of large audio databases. In the first part of this thesis, we present the state of the art of the ...

Labbé, Étienne — IRIT
