Adaptive Algorithms and Variable Structures for Distributed Estimation
The focus of this thesis is the analysis and design of new non-centralized learning algorithms for potential application in distributed adaptive estimation. Such algorithms should have low processing requirements and need minimal communication between the nodes that form a distributed network. They should, moreover, perform acceptably when the nodal input measurements are coloured and the environment is dynamic. Least mean square (LMS) and recursive least squares (RLS) type incremental distributed adaptive learning algorithms are first introduced on the basis of a Hamiltonian cycle through all of the nodes of a distributed network; these schemes require each node to communicate with only one of its neighbours during the learning process. An original steady-state performance analysis of the incremental LMS algorithm is performed by exploiting a weighted spatial-temporal energy conservation formulation. This analysis confirms that the learning algorithm equalizes the effect of varying signal-to-noise ratio (SNR) in the measurements at the nodes within the network. A novel incremental affine projection algorithm (APA) is then proposed to mitigate the slow convergence that the incremental LMS algorithm exhibits when the adaptive filter inputs are coloured. The computational and memory costs of this incremental APA are shown, for a range of filter lengths, to be lower than those of an incremental RLS algorithm. The transient and steady-state performance of the incremental APA is evaluated in detail through analytical and simulation studies. The inter-node collaboration within the incremental APA is further enhanced through the adoption of a diffusion-based cooperation protocol. The concept of variable tap-length (VT) adaptive filtering is next introduced to facilitate structural change during learning.
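To illustrate the incremental mode of cooperation described above, the following minimal sketch passes a single weight estimate around a Hamiltonian cycle, with each node performing one LMS update on its own data before forwarding the estimate to its neighbour. The linear measurement model, step size mu, and all numerical settings are illustrative assumptions, not the thesis's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
M = 5           # adaptive filter length (assumed)
N = 8           # number of nodes on the Hamiltonian cycle (assumed)
mu = 0.05       # LMS step size (assumed)
w_true = rng.standard_normal(M)          # unknown parameter vector

# per-node data: regressors u and noisy measurements d = u . w_true + v
T = 200
U = [rng.standard_normal((T, M)) for _ in range(N)]
D = [U[k] @ w_true + 0.01 * rng.standard_normal(T) for k in range(N)]

w = np.zeros(M)                          # global estimate
for i in range(T):                       # time index
    psi = w                              # start of cycle: current estimate
    for k in range(N):                   # visit each node once per cycle
        u, d = U[k][i], D[k][i]
        e = d - u @ psi                  # local a priori error
        psi = psi + mu * e * u           # incremental LMS update at node k
    w = psi                              # end of cycle: updated estimate

print(np.linalg.norm(w - w_true))        # residual weight-error norm
```

Note that only the current estimate psi travels between neighbours; no node shares its raw measurements, which is the source of the low communication cost.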
Analysis confirms that the converged difference between the segmented mean square error (MSE) of a filter formed from a number of the initial coefficients of an adaptive filter and the MSE of the full adaptive filter is a monotonically non-increasing function of the tap-length of the adaptive filter. An innovative strategy for adapting the leakage factor, a key parameter in the fractional tap-length (FT) learning algorithm, is proposed to ensure that the converged tap-length can be used to determine the true length of the unknown system over a range of initial tap-lengths. For sub-Gaussian noise conditions, a VT adaptive filtering algorithm that exploits both second- and fourth-order statistics is also presented. Finally, VT adaptive filters are introduced for the first time into distributed adaptive estimation; in particular, an FT learning algorithm determines the length of the adaptive filter within each node in parallel with the update of the filter coefficients. The efficacy of this technique is confirmed through analytical and simulation studies of the steady-state performance.
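The fractional tap-length idea can be sketched as follows, under stated assumptions: a single-node system-identification setup with white input, a fixed leakage factor alpha standing in for the adaptive-leakage strategy proposed in the thesis, and illustrative values for the segment offset delta and the tap-length step size gamma. The fractional tap-length l_f grows while the segmented error exceeds the full error and is pulled down by the leakage term, so the integer tap-length L settles near the region where lengthening the filter no longer reduces the MSE.

```python
import numpy as np

rng = np.random.default_rng(1)
L_true = 12                          # length of the unknown system (assumed)
w_true = rng.standard_normal(L_true)

# illustrative parameter values, not the thesis's settings
alpha, gamma = 0.01, 0.1             # leakage factor and tap-length step size
delta, mu = 3, 0.02                  # segment offset and LMS step size
L_max = 2 * L_true                   # cap on the tap-length
L, l_f = 4, 4.0                      # integer and fractional tap-lengths

x = rng.standard_normal(6000)        # white input
d = np.convolve(x, w_true)[:len(x)] + 0.01 * rng.standard_normal(len(x))

w = np.zeros(L)
for n in range(L_max, len(x)):
    u = x[n - L + 1:n + 1][::-1]     # regressor for the current tap-length
    e_full = d[n] - u @ w            # error of the full L-tap filter
    M = L - delta                    # segment: first L - delta coefficients
    e_seg = d[n] - u[:M] @ w[:M]     # segmented error
    w = w + mu * e_full * u          # LMS coefficient update
    # fractional tap-length update: grow while the segmented error exceeds
    # the full error, shrink slowly through the leakage term alpha
    l_f = l_f - alpha - gamma * (e_full**2 - e_seg**2)
    l_f = min(max(l_f, delta + 1.0), float(L_max))
    L_new = int(round(l_f))
    if L_new != L:                   # resize the filter, keeping old taps
        w = np.concatenate([w, np.zeros(L_new - L)]) if L_new > L else w[:L_new]
        L = L_new
```

With a fixed leakage the converged tap-length hovers somewhat above the true system length (roughly L_true plus the segment offset); adapting the leakage factor, as the thesis proposes, is what allows the true length to be recovered from the converged value.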
