Stability of Coupled Adaptive Filters
Nowadays, many disciplines in science and engineering deal with problems whose solution relies on knowledge about the characteristics of one or more given systems that can only be ascertained from restricted observations. This requires the fitting of an adequately chosen model, such that it "best" conforms to a set of measured data. Depending on the context, this fitting procedure may resort to a huge amount of recorded data and abundant numerical power, or, contrarily, to only a few streams of samples that have to be processed on the fly at low computational cost. This thesis focuses exclusively on the latter scenario. It specifically studies the unexpected behaviour and reliability of the widely used and computationally highly efficient class of gradient-type algorithms. Additionally, special attention is paid to systems that combine several of them.

Chapter 3 is dedicated to so-called asymmetric algorithms, that is, gradient-type algorithms that do not employ the (commonly used) unaltered input vector for regression. In a first step, it is shown that for such algorithms, the mapping matrix of the underlying homogeneous recursion has one singular value that is larger than one, which entails the risk of parameter divergence. Restricting attention to the most prominent subclass of asymmetric algorithms, the least-mean-squares (LMS) algorithm with matrix step-size, simulation experiments demonstrate that such divergence does not occur for all step-size matrices, even under worst-case conditions. Motivated by this observation, the first part of this chapter dissects the phenomenon based on geometric arguments and arrives at the novel insight that persistently changing eigenspaces of the step-size matrix are at the core of this worst-case parameter divergence. In the second part, analytic as well as numeric methods are derived that allow one to provoke this type of divergence intentionally, in order to assess specific algorithms.
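The singular-value observation for the matrix step-size LMS can be illustrated with a minimal numerical sketch (illustrative values only, not taken from the thesis): for the homogeneous parameter-error recursion w[k+1] = (I - M x x^T) w[k], a non-scalar step-size matrix M generally yields a mapping matrix with one singular value above one, whereas a scalar step-size keeps all singular values at or below one.

```python
# Sketch with assumed, illustrative values: compare the largest singular
# value of the homogeneous mapping matrix A = I - M x x^T for a
# non-scalar (matrix) step-size versus a scalar step-size.
import numpy as np

x = np.array([1.0, 1.0])                 # one regressor vector

# non-scalar step-size matrix: one singular value of A exceeds one,
# so a worst-case input sequence can grow the parameter error
M = np.diag([0.5, 0.1])
A_asym = np.eye(2) - M @ np.outer(x, x)
sigma_asym = np.linalg.svd(A_asym, compute_uv=False).max()

# scalar step-size (conventional LMS): A is symmetric and, for a
# sufficiently small step-size, no singular value exceeds one
mu = 0.3
A_sym = np.eye(2) - mu * np.outer(x, x)
sigma_sym = np.linalg.svd(A_sym, compute_uv=False).max()

print(sigma_asym > 1.0)   # True
print(sigma_sym <= 1.0 + 1e-12)   # True
```

As the thesis notes, a singular value above one only opens the door to divergence; whether it actually occurs depends on the input sequence, which is precisely the worst-case question studied in Chapter 3.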
A combination of arbitrarily many symmetric algorithms, i.e., conventional LMS algorithms, is addressed in Chapter 4. It considers a structure in which such algorithms mutually interfere via a linear, memoryless coupling among their individual a priori errors. Conditions for ℓ2-stability as well as boundedness of the parameter error are derived. The former are obtained by solving a linear system of inequalities, resorting to the theory of M-matrices. The latter are compactly stated by means of the Khatri-Rao matrix product. Finally, a practically relevant case of coupling is analysed, in which all of the individual adaptive schemes employ the same update error. This situation is found to be equivalent to an LMS algorithm with matrix step-size, making it accessible to the findings of Chapter 3.

The theoretical apparatus obtained in this way is applied in Chapter 5 to types of adaptive systems that are encountered in real life. First, for an adaptive Wiener model, consisting of a linear filter followed by a memoryless non-linearity, in a configuration typically used in digital pre-distortion of microwave power amplifiers, a condition for ℓ2-stability and boundedness of its parameter error is identified. Resorting to the knowledge gained about asymmetric algorithms, this boundedness is then found to be extendible to less restrictive and practically more feasible constraints. Then, the multichannel filtered-x LMS algorithm is studied in the context of active noise control, leading to sufficient bounds for ℓ2-stability and boundedness of its parameter error. Finally, the theory of Chapters 3 and 4 is harnessed to confirm that parameter convergence of an arbitrarily sized multilayer perceptron trained by the backpropagation algorithm can hardly be ensured and depends strongly on the quality of its parameter initialisation.
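The equivalence between a shared-error coupling and a single matrix step-size LMS can be sketched as follows. The concrete setup below is an assumption for illustration (two additively combined filters driven by one common error), not the thesis' exact model: stacking the weight vectors and duplicating the regressor turns the coupled pair into one LMS update with a diagonal step-size matrix.

```python
# Sketch under assumed notation: two LMS filters whose outputs add up,
# both updated with the common error e = d - (w1 + w2)^T x and
# individual step-sizes mu1, mu2.  Stacking w = [w1; w2] and
# xs = [x; x] yields a single LMS recursion with the matrix step-size
# M = diag(mu1*I, mu2*I); both implementations are run side by side.
import numpy as np

rng = np.random.default_rng(0)
n, mu1, mu2 = 3, 0.05, 0.02

w1 = np.zeros(n)                       # coupled pair of filters
w2 = np.zeros(n)
w = np.zeros(2 * n)                    # stacked filter
M = np.diag([mu1] * n + [mu2] * n)     # matrix step-size

for _ in range(50):
    x = rng.standard_normal(n)         # common regressor
    d = rng.standard_normal()          # arbitrary desired signal

    # coupled pair: one shared update error drives both filters
    e = d - (w1 + w2) @ x
    w1 = w1 + mu1 * e * x
    w2 = w2 + mu2 * e * x

    # equivalent LMS with matrix step-size on the stacked regressor
    xs = np.concatenate([x, x])
    w = w + M @ xs * (d - w @ xs)

print(np.allclose(np.concatenate([w1, w2]), w))   # True
```

This rewriting is what makes the shared-error coupling accessible to the matrix step-size results of Chapter 3: stability of the coupled system reduces to the behaviour of a single LMS recursion with a (here diagonal) step-size matrix.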
