Design and evaluation of noise reduction techniques for binaural hearing aids

One of the main complaints of hearing aid users is their degraded speech understanding in noisy environments. Modern hearing aids therefore include noise reduction techniques. These techniques are typically designed for a monaural application, i.e. in a single device. However, the majority of hearing aid users currently have hearing aids at both ears in a so-called bilateral fitting, as it is widely accepted that this leads to better speech understanding and user satisfaction. Unfortunately, the independent signal processing (in particular the noise reduction) in a bilateral fitting can destroy the so-called binaural cues, namely the interaural time and level differences (ITDs and ILDs), which are used to localize sound sources in the horizontal plane. A recent technological advance is the so-called binaural hearing aid, where a wireless link allows for the exchange of data (or even microphone signals) between the ...

Cornelis, Bram — KU Leuven
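
As a side note on the interaural time and level differences mentioned above, the following minimal Python sketch (not taken from the thesis) estimates a broadband ITD via cross-correlation and an ILD from the RMS level ratio; the toy signals, the 16 kHz sampling rate and the function name are assumptions made purely for illustration.

    import numpy as np

    def estimate_itd_ild(left, right, fs):
        # Broadband ITD: lag of the cross-correlation maximum between the ears
        corr = np.correlate(left, right, mode="full")
        lag = np.argmax(corr) - (len(right) - 1)
        itd = -lag / fs        # positive ITD: the sound reaches the left ear first
        # ILD: level ratio between the ears in dB
        ild = 20.0 * np.log10(np.sqrt(np.mean(left ** 2)) /
                              np.sqrt(np.mean(right ** 2)))
        return itd, ild

    # Toy example: a white-noise source arriving earlier and louder at the left ear
    fs = 16000
    src = np.random.randn(fs)
    delay = 5                  # samples, i.e. roughly 0.3 ms
    left = np.concatenate([src, np.zeros(delay)])
    right = 0.7 * np.concatenate([np.zeros(delay), src])
    itd, ild = estimate_itd_ild(left, right, fs)
    print(f"ITD ~ {itd * 1e3:.2f} ms, ILD ~ {ild:.1f} dB")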


Preserving binaural cues in noise reduction algorithms for hearing aids

Hearing aid users experience great difficulty in understanding speech in noisy environments. This has led to the introduction of noise reduction algorithms in hearing aids. The development of these algorithms is typically done monaurally. However, the human auditory system is a binaural system, which compares and combines the signals received by both ears to perceive a sound source as a single entity in space. Providing the hearing aid user with two monaural, independently operating noise reduction systems, i.e. a bilateral configuration, may disrupt the binaural information needed to localize sound sources correctly and to improve speech perception in noise. In this research project, we first examined the influence of commercially available, bilateral noise reduction algorithms on binaural hearing. Extensive objective and perceptual evaluations showed that the bilateral adaptive directional microphone (ADM) and the bilateral fixed directional microphone, two of the most ...

Van den Bogaert, Tim — Katholieke Universiteit Leuven
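
The bilateral ADM evaluated above is, in essence, a pair of independently running adaptive directional microphones. The sketch below shows a textbook first-order adaptive differential microphone (delay-and-subtract cardioids with an adapted null-steering parameter); it is a generic illustration under the simplifying assumption of a one-sample inter-microphone delay, not the commercial algorithm studied in the thesis.

    import numpy as np

    def adm(front, back, mu=0.1, eps=1e-6):
        # First-order adaptive differential microphone (illustrative sketch).
        # front, back: two closely spaced omni microphone signals; the acoustic
        # travel time between them is assumed to be exactly one sample.
        n = len(front)
        out = np.zeros(n)
        beta = 0.5                     # null-steering parameter, kept in [0, 1]
        for i in range(1, n):
            cf = front[i] - back[i-1]  # forward-facing cardioid (null at the back)
            cb = back[i] - front[i-1]  # backward-facing cardioid (null at the front)
            y = cf - beta * cb         # steer the null towards the dominant rear noise
            # NLMS-style update of beta to minimise the output power
            beta += mu * y * cb / (cb * cb + eps)
            beta = min(max(beta, 0.0), 1.0)
            out[i] = y
        return out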


Binaural Beamforming Algorithms and Parameter Estimation Methods Exploiting External Microphones

In everyday speech communication situations, undesired acoustic sources such as competing speakers and background noise frequently lead to decreased speech intelligibility. Over the last decades, hearing devices have evolved from simple sound amplification devices to more sophisticated devices with complex functionalities such as multi-microphone speech enhancement. Binaural beamforming algorithms are spatial filters that exploit the information captured by multiple microphones on both sides of the head of the listener. Besides reducing the undesired sources, another important objective of a binaural beamforming algorithm is to preserve the binaural cues of all sound sources, so that the listener's spatial impression of the acoustic scene is retained. The aim of this thesis is to develop and evaluate advanced binaural beamforming algorithms and to incorporate one or more external microphones in a binaural hearing device configuration. The first focus is to improve state-of-the-art binaural ...

Gößling, Nico — University of Oldenburg
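
One common way to combine noise reduction with binaural-cue preservation for the target source is the binaural MVDR beamformer, which applies a distortionless constraint with respect to a reference microphone at each ear. The sketch below is a generic per-frequency formulation (noise covariance matrix and target steering vector assumed given), not necessarily the specific algorithms developed in this thesis.

    import numpy as np

    def binaural_mvdr(Rv, a, ref_left, ref_right):
        # Per-frequency binaural MVDR weights (illustrative sketch).
        # Rv: (M, M) noise covariance matrix, a: (M,) target steering vector,
        # ref_left / ref_right: indices of the reference microphones at each ear.
        Rv_inv_a = np.linalg.solve(Rv, a)
        denom = np.vdot(a, Rv_inv_a).real          # a^H Rv^{-1} a
        w_left  = Rv_inv_a * np.conj(a[ref_left])  / denom
        w_right = Rv_inv_a * np.conj(a[ref_right]) / denom
        return w_left, w_right

    # For a target signal s, the outputs w^H (a s) equal a[ref_left] s and
    # a[ref_right] s, i.e. the target keeps the interaural differences it has
    # at the two reference microphones, which preserves its binaural cues.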


Cognitive-driven speech enhancement using EEG-based auditory attention decoding for hearing aid applications

Identifying the target speaker in hearing aid applications is an essential ingredient to improve speech intelligibility. Although several speech enhancement algorithms are available to reduce background noise or to perform source separation in multi-speaker scenarios, their performance depends on correctly identifying the target speaker to be enhanced. Recent advances in electroencephalography (EEG) have shown that it is possible to identify the target speaker to whom the listener is attending using single-trial EEG-based auditory attention decoding (AAD) methods. However, in realistic acoustic environments the AAD performance is influenced by undesired disturbances such as interfering speakers, noise and reverberation. In addition, it is important for real-world hearing aid applications to close the AAD loop by presenting on-line auditory feedback. This thesis deals with the problem of identifying and enhancing the target speaker in realistic acoustic environments based on decoding the auditory attention ...

Aroudi, Ali — University of Oldenburg, Germany
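
A widely used AAD approach, and a reasonable stand-in for the decoding step described above, is stimulus reconstruction: a ridge-regression decoder maps the EEG back to a speech envelope, and attention is assigned to the speaker whose envelope correlates best with the reconstruction. The sketch below assumes preprocessed EEG and speech envelopes at a common sampling rate; all names and parameter values are illustrative.

    import numpy as np

    def lagged_design(eeg, lags):
        # Stack the EEG and its delayed copies: [eeg(t), eeg(t-1), ...]
        T, C = eeg.shape
        X = np.zeros((T, C * lags))
        for l in range(lags):
            X[l:, l*C:(l+1)*C] = eeg[:T-l]
        return X

    def train_decoder(eeg, attended_env, lags=32, lam=1e2):
        # Ridge-regression backward decoder reconstructing the attended envelope
        X = lagged_design(eeg, lags)
        return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]),
                               X.T @ attended_env)

    def decode_attention(eeg, env1, env2, decoder, lags=32):
        # Pick the speaker whose envelope correlates best with the reconstruction
        rec = lagged_design(eeg, lags) @ decoder
        c1 = np.corrcoef(rec, env1)[0, 1]
        c2 = np.corrcoef(rec, env2)[0, 1]
        return 1 if c1 >= c2 else 2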


Digital signal processing algorithms for noise reduction, dynamic range compression, and feedback cancellation in hearing aids

Hearing loss can be caused by many factors, e.g., daily exposure to excessive noise in the work environment and listening to loud music. Another important cause is age-related, i.e., the gradual loss of hearing that occurs as people get older. In general, hearing-impaired people suffer from a frequency-dependent hearing loss and from a reduced dynamic range between the hearing threshold and the uncomfortable level. This means that the uncomfortable level is the same for normal-hearing people and for people with so-called sensorineural hearing loss, but the hearing threshold and the sensitivity to soft sounds are shifted as a result of the hearing loss. To compensate for this kind of hearing loss, the hearing aid should apply a frequency-dependent and level-dependent gain. The corresponding digital signal processing (DSP) algorithm is referred to as dynamic range ...

Ngo, Kim — KU Leuven
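
To make the level-dependent gain concrete, the sketch below implements a simple single-band dynamic range compressor (an envelope follower plus a gain curve that is linear below a knee and compressive above it); a real hearing aid runs such a stage per frequency band, with band gains fitted to the individual audiogram. All parameter values are illustrative assumptions.

    import numpy as np

    def compress(x, fs, gain_db=20.0, threshold_db=50.0, ratio=3.0,
                 attack=0.005, release=0.050, spl_offset_db=100.0):
        # Single-band dynamic range compressor (illustrative sketch).
        # Levels above threshold_db are compressed by `ratio`; below it the
        # full linear gain_db is applied.
        a_att = np.exp(-1.0 / (attack * fs))
        a_rel = np.exp(-1.0 / (release * fs))
        env = 0.0
        y = np.zeros(len(x))
        for n, s in enumerate(x):
            # Envelope follower (fast attack, slow release)
            mag = abs(s)
            a = a_att if mag > env else a_rel
            env = a * env + (1 - a) * mag
            level_db = 20 * np.log10(env + 1e-12) + spl_offset_db  # crude SPL estimate
            # Level-dependent gain: linear below the knee, compressive above it
            if level_db <= threshold_db:
                g_db = gain_db
            else:
                g_db = gain_db - (level_db - threshold_db) * (1 - 1 / ratio)
            y[n] = s * 10 ** (g_db / 20)
        return y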


Spatio-Temporal Speech Enhancement in Adverse Acoustic Conditions

Never before has speech been captured as often by electronic devices equipped with one or multiple microphones, serving a variety of applications. It is the key aspect in digital telephony, hearing devices, and voice-driven human-to-machine interaction. When speech is recorded, the microphones also capture a variety of further undesired sound components due to adverse acoustic conditions. Interfering speech, background noise and reverberation, i.e. the persistence of sound in a room after excitation, caused by a multitude of reflections off the room enclosure, are detrimental to the quality and intelligibility of the target speech as well as to the performance of automatic speech recognition. Hence, speech enhancement aiming at estimating the early target-speech component, which contains the direct component and early reflections, is crucial to nearly all speech-related applications presently available. In this thesis, we compare, propose and evaluate existing and novel approaches ...

Dietzen, Thomas — KU Leuven
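
The "early target-speech component" mentioned above can be made concrete by splitting a room impulse response into its direct-plus-early part and its late tail; the sketch below does exactly that, with the common (but not universal) 50 ms boundary as an assumed choice.

    import numpy as np

    def split_early_late(rir, fs, boundary_ms=50.0):
        # Split a room impulse response into its direct + early-reflection part
        # and its late-reverberation tail (illustrative sketch).
        rir = np.asarray(rir, dtype=float)
        k = int(boundary_ms * 1e-3 * fs)
        early = np.zeros_like(rir); early[:k] = rir[:k]
        late = np.zeros_like(rir);  late[k:]  = rir[k:]
        return early, late

    # The early target-speech component is then np.convolve(speech, early),
    # while np.convolve(speech, late) is the detrimental late reverberation.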


Speech dereverberation in noisy environments using time-frequency domain signal models

Reverberation is the sum of reflected sound waves and is present in any conventional room. Speech communication devices such as mobile phones in hands-free mode, tablets, smart TVs, teleconferencing systems, hearing aids, voice-controlled systems, etc. use one or more microphones to pick up the desired speech signals. When the microphones are not in the proximity of the desired source, strong reverberation and noise can degrade the signal quality at the microphones and can impair the intelligibility and the performance of automatic speech recognizers. Therefore, there is a strong demand for processing the microphone signals such that reverberation and noise are reduced. The process of reducing or removing reverberation from recorded signals is called dereverberation. As dereverberation is usually a completely blind problem, where the only available information is the microphone signals themselves, and as the acoustic scenario can be non-stationary, ...

Braun, Sebastian — Friedrich-Alexander Universität Erlangen-Nürnberg
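
A typical time-frequency domain dereverberation approach, and a rough stand-in for the signal models discussed above, is delayed linear prediction in the STFT domain: late reverberation is predicted from past frames and subtracted. The sketch below is a simplified, unweighted single-channel variant (the well-known WPE method additionally reweights the least-squares problem iteratively); parameter values are illustrative.

    import numpy as np

    def delayed_lp_dereverb(Y, delay=3, order=10, lam=1e-3):
        # Per-frequency delayed linear prediction (simplified, unweighted sketch).
        # Y: (F, N) STFT of a reverberant microphone signal. Late reverberation is
        # predicted from frames at least `delay` frames in the past and subtracted;
        # the direct sound and early reflections remain largely untouched.
        F, N = Y.shape
        D = np.array(Y, dtype=complex)
        for f in range(F):
            y = Y[f]
            # Stack delayed past frames as regressors: ybar[n] = [y[n-delay], ...]
            Ybar = np.zeros((N, order), dtype=complex)
            for k in range(order):
                shift = delay + k
                Ybar[shift:, k] = y[:N-shift]
            # Regularized least-squares prediction filter
            G = Ybar.conj().T @ Ybar + lam * np.eye(order)
            g = np.linalg.solve(G, Ybar.conj().T @ y)
            D[f] = y - Ybar @ g
        return D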


Adaptive filtering techniques for noise reduction and acoustic feedback cancellation in hearing aids

Understanding speech in noise and the occurrence of acoustic feedback belong to the major problems of current hearing aid users. Hence, an urgent demand exists for efficient and effective digital signal processing algorithms that offer a solution to these issues. In this thesis we develop adaptive filtering techniques for noise reduction and acoustic feedback cancellation. Thanks to the availability of low-power digital signal processors, these algorithms can be integrated in a hearing aid. Because of the ongoing miniaturization in the hearing aid industry and the growing tendency towards multi-microphone hearing aids, robustness against imperfections such as microphone mismatch has become a major issue in the design of a noise reduction algorithm. In this thesis we propose multi-microphone noise reduction techniques that are based on multi-channel Wiener filtering (MWF). Theoretical and experimental analyses demonstrate that these MWF-based techniques are less ...

Spriet, Ann — Katholieke Universiteit Leuven
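
For reference, the per-frequency multi-channel Wiener filter mentioned above can be written directly in terms of estimated covariance matrices. The sketch below is the standard (speech-distortion-weighted) MWF formulation with the covariance estimates assumed given; it does not include the thesis's specific robust variants.

    import numpy as np

    def mwf(Ryy, Rvv, ref=0, mu=1.0):
        # Multi-channel Wiener filter for one frequency bin (illustrative sketch).
        # Ryy: (M, M) speech-plus-noise covariance, Rvv: (M, M) noise-only
        # covariance (estimated e.g. during speech pauses), ref: reference
        # microphone index, mu: noise-reduction / speech-distortion trade-off
        # (mu = 1 gives the standard MWF).
        Rxx = Ryy - Rvv                        # speech covariance estimate
        w = np.linalg.solve(Rxx + mu * Rvv, Rxx[:, ref])
        return w                               # enhanced output: w^H y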


Dereverberation and noise reduction techniques based on acoustic multi-channel equalization

In many hands-free speech communication applications such as teleconferencing or voice-controlled applications, the recorded microphone signals contain not only the desired speech signal, but also attenuated and delayed copies of the desired speech signal due to reverberation, as well as additive background noise. Reverberation and background noise cause a signal degradation which can impair speech intelligibility and decrease the performance of many signal processing techniques. Acoustic multi-channel equalization techniques, which aim at inverting or reshaping the measured or estimated room impulse responses between the speech source and the microphone array, comprise an attractive approach to speech dereverberation since, in theory, perfect dereverberation can be achieved. However, in practice, such techniques suffer from several drawbacks, such as uncontrolled perceptual effects, sensitivity to perturbations in the measured or estimated room impulse responses, and background noise amplification. The aim of this thesis ...

Kodrasi, Ina — University of Oldenburg
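
As an illustration of acoustic multi-channel equalization, the sketch below designs regularized least-squares inverse filters so that the sum of the filtered room impulse responses approximates a delayed unit impulse (in the spirit of MINT-type inversion). The RIRs are assumed known; the regularization constant, filter length and target delay are illustrative.

    import numpy as np
    from scipy.linalg import toeplitz

    def equalizer(rirs, Lg, delay, reg=1e-4):
        # Regularized least-squares acoustic multi-channel equalization sketch.
        # rirs: list of M room impulse responses (length Lh), Lg: inverse-filter
        # length. Designs filters g_m so that sum_m h_m * g_m approximates a
        # unit impulse delayed by `delay` samples.
        M, Lh = len(rirs), len(rirs[0])
        L = Lh + Lg - 1
        # Multi-channel convolution matrix H = [H_1 ... H_M], each block Toeplitz
        H = np.hstack([toeplitz(np.concatenate([h, np.zeros(Lg - 1)]),
                                np.zeros(Lg)) for h in rirs])
        d = np.zeros(L); d[delay] = 1.0              # desired overall response
        g = np.linalg.solve(H.T @ H + reg * np.eye(M * Lg), H.T @ d)
        return g.reshape(M, Lg)                      # one inverse filter per channel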


Distributed Signal Processing for Binaural Hearing Aids

Over the last centuries, hearing aids have evolved from crude and bulky horn-shaped instruments to lightweight and almost invisible digital signal processing devices. While most of the research has focused on the design of monaural devices, the use of a wireless link has recently been advocated to enable data transfer between hearing aids so as to obtain a binaural system. The availability of a wireless link offers brand new perspectives but also poses great technical challenges. It requires the design of novel signal processing schemes that address the restricted communication bitrates, processing delays and power consumption limitations imposed by wireless hearing aids. The goal of this dissertation is to address these issues at both a theoretical and a practical level. We start by taking a distributed source coding view on the problem of binaural noise reduction. The proposed analysis allows ...

Roy, Olivier — EPFL


Spherical Microphone Array Processing for Acoustic Parameter Estimation and Signal Enhancement

In many distant speech acquisition scenarios, such as hands-free telephony or teleconferencing, the desired speech signal is corrupted by noise and reverberation. This degrades both the speech quality and intelligibility, making communication difficult or even impossible. Speech enhancement techniques seek to mitigate these effects and extract the desired speech signal. This objective is commonly achieved through the use of microphone arrays, which take advantage of the spatial properties of the sound field in order to reduce noise and reverberation. Spherical microphone arrays, where the microphones are arranged in a spherical configuration, usually mounted on a rigid baffle, are able to analyze the sound field in three dimensions; the captured sound field can then be efficiently described in the spherical harmonic domain (SHD). In this thesis, a number of novel spherical array processing algorithms are proposed, formulated in the SHD. In ...

Jarrett, Daniel P. — Imperial College London
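
To illustrate the spherical harmonic domain mentioned above, the sketch below computes SHD coefficients from pressure samples on a sphere by quadrature; the mode-strength compensation required for a rigid baffle is deliberately omitted, and the microphone angles and quadrature weights are assumed given.

    import numpy as np
    from scipy.special import sph_harm

    def shd_transform(pressure, azimuth, colatitude, weights, order):
        # Transform pressure samples on a sphere into spherical harmonic (SHD)
        # coefficients by quadrature (illustrative sketch).
        # pressure: (Q,) complex samples at Q microphones, azimuth/colatitude:
        # (Q,) microphone angles in radians, weights: (Q,) quadrature weights.
        coeffs = []
        for n in range(order + 1):
            for m in range(-n, n + 1):
                Y = sph_harm(m, n, azimuth, colatitude)   # Y_n^m at the mic positions
                # p_nm = sum_q w_q p(Omega_q) Y_n^m(Omega_q)^*
                coeffs.append(np.sum(weights * pressure * np.conj(Y)))
        return np.array(coeffs)   # ordered (n, m) = (0,0), (1,-1), (1,0), (1,1), ...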


Integrating monaural and binaural cues for sound localization and segregation in reverberant environments

The problem of segregating a sound source of interest from an acoustic background has been extensively studied due to applications in hearing prostheses, robust speech/speaker recognition and audio information retrieval. Computational auditory scene analysis (CASA) approaches the segregation problem by utilizing grouping cues involved in the perceptual organization of sound by human listeners. Binaural processing, where input signals resemble those that enter the two ears, is of particular interest in the CASA field. The dominant approach to binaural segregation has been to derive spatially selective filters in order to enhance the signal in a direction of interest. As such, the problems of sound localization and sound segregation are closely tied. While spatial filtering has been widely utilized, substantial performance degradation is incurred in reverberant environments and, more fundamentally, segregation cannot be performed without sufficient spatial separation between sources. This dissertation ...

Woodruff, John — The Ohio State University
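
A minimal illustration of cue-based binaural segregation is a binary time-frequency mask driven by interaural phase differences: T-F units consistent with the target direction are kept, the rest suppressed. The sketch below is a deliberately naive version (it ignores ILD cues and the high-frequency phase ambiguity that the integration of monaural and binaural cues is meant to address); all parameters are illustrative.

    import numpy as np
    from scipy.signal import stft, istft

    def itd_mask_segregation(left, right, fs, target_itd=0.0, tol=2e-4,
                             nperseg=512):
        # Binary time-frequency mask from interaural phase differences.
        # T-F units whose IPD-implied delay is within `tol` seconds of the
        # target ITD are kept; all others are suppressed.
        f, t, L = stft(left, fs, nperseg=nperseg)
        _, _, R = stft(right, fs, nperseg=nperseg)
        ipd = np.angle(L * np.conj(R))            # interaural phase difference
        freqs = f[:, None] + 1e-12                # avoid division by zero at DC
        delay = ipd / (2 * np.pi * freqs)         # delay estimate per T-F unit
        mask = (np.abs(delay - target_itd) < tol).astype(float)
        _, seg = istft(mask * L, fs, nperseg=nperseg)
        return seg, mask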


Robust feedback cancellation algorithms for single- and multi-microphone hearing aids

When providing the necessary amplification in hearing aids, the risk of acoustic feedback is increased due to the coupling between the hearing aid loudspeaker and the hearing aid microphone(s). This acoustic feedback is often perceived as an annoying whistling or howling. Thus, to reduce the occurrence of acoustic feedback, robust and fast-acting feedback suppression algorithms are required. The main objective of this thesis is to develop and evaluate algorithms for robust and fast-acting feedback suppression in hearing aids. Specifically, we focus on enhancing the performance of adaptive filtering algorithms that estimate the feedback component in the hearing aid microphone by reducing the number of required adaptive filter coefficients and by improving the trade-off between fast convergence and good steady-state performance. Additionally, we develop fixed spatial filter design methods that can be applied in a multi-microphone earpiece.

Schepker, Henning — University of Oldenburg
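
The adaptive-filter estimation of the feedback component described above is commonly realized with an NLMS update driven by the loudspeaker signal. The sketch below is a plain NLMS feedback canceller (it does not include the decorrelation or robustness measures a practical system adds); filter length and step size are illustrative.

    import numpy as np

    def nlms_feedback_canceller(loudspeaker, microphone, taps=64, mu=0.1, eps=1e-6):
        # NLMS adaptive filter estimating the acoustic feedback path from the
        # hearing aid loudspeaker to the microphone (illustrative sketch).
        # The estimated feedback component is subtracted from the microphone
        # signal; the error signal feeds the forward path of the hearing aid.
        w = np.zeros(taps)                     # feedback path estimate
        err = np.zeros(len(microphone))
        for n in range(taps, len(microphone)):
            u = loudspeaker[n-taps:n][::-1]        # most recent loudspeaker samples
            y_hat = w @ u                          # estimated feedback component
            err[n] = microphone[n] - y_hat
            w += mu * err[n] * u / (u @ u + eps)   # normalized LMS update
        return err, w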


Distributed Signal Processing Algorithms for Acoustic Sensor Networks

In recent years, there has been a proliferation of wireless devices for individual use, to the point of being ubiquitous. Recent trends have been incorporating many of these devices (or nodes) together, which acquire signals and work in unison over wireless channels, in order to accomplish a predefined task. This type of cooperative sensing and communication between devices forms the basis of a so-called wireless sensor network (WSN). Due to the ever-increasing processing power of these nodes, WSNs are being assigned more complicated and computationally demanding tasks. Recent research has started to exploit this increased processing power in order for WSNs to perform tasks pertaining to audio signal acquisition and processing, forming so-called wireless acoustic sensor networks (WASNs). Audio signal processing poses new and unique problems when compared to traditional sensing applications, as the signals observed often have ...

Szurley, Joseph C. — KU Leuven


Informed spatial filters for speech enhancement

In modern devices which provide hands-free speech capturing functionality, such as hands-free communication kits and voice-controlled devices, the received speech signal at the microphones is corrupted by background noise, interfering speech signals, and room reverberation. In many practical situations, the microphones are not necessarily located near the desired source, and hence, the ratio of the desired speech power to the power of the background noise, the interfering speech, and the reverberation at the microphones can be very low, often around or even below 0 dB. In such situations, the comfort of human-to-human communication, as well as the accuracy of automatic speech recognisers for voice-controlled applications, can be significantly degraded. Therefore, effective speech enhancement algorithms are required to process the microphone signals before transmitting them to the far-end side for communication, or before feeding them into a speech recognition ...

Taseska, Maja — Friedrich-Alexander Universität Erlangen-Nürnberg
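
A simple example of an "informed" spatial filter is an MVDR beamformer whose steering vector is built per time-frequency bin from an estimated direction of arrival. The sketch below assumes a far-field source and a linear array; the noise coherence matrix, array geometry and all parameter names are illustrative assumptions.

    import numpy as np

    def informed_mvdr_weights(doa_rad, freq, mic_pos, noise_coh=None, c=343.0):
        # MVDR weights for one T-F bin, 'informed' by an estimated narrowband
        # direction of arrival (far-field, linear array, illustrative sketch).
        # mic_pos: (M,) microphone positions in metres along the array axis,
        # noise_coh: (M, M) noise coherence matrix (identity = spatially white).
        M = len(mic_pos)
        # Far-field steering vector for the estimated DOA
        delays = mic_pos * np.cos(doa_rad) / c
        d = np.exp(-2j * np.pi * freq * delays)
        Gamma = np.eye(M) if noise_coh is None else noise_coh
        Gi_d = np.linalg.solve(Gamma, d)
        return Gi_d / np.vdot(d, Gi_d)    # w = Gamma^{-1} d / (d^H Gamma^{-1} d)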
