Abstract (truncated)

The first part of this dissertation considers distributed learning problems over networked agents. The general objective of distributed adaptation and learning is to solve global stochastic optimization problems through localized interactions and without knowledge of the statistical properties of the data. Regularization is a useful technique for encouraging or enforcing structural properties, such as sparsity or constraints, on the resulting solution. A substantial number of regularizers are inherently non-smooth, while many cost functions are differentiable. We propose distributed and adaptive strategies that minimize aggregate sums of such objectives. In doing so, we exploit the structure of the individual objectives as sums of differentiable costs and non-differentiable regularizers. The resulting algorithms are adaptive ...
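As a brief, illustrative sketch (the symbols N, Q_k, R_k, x_{k,i}, and \mu below are assumed notation, not taken from the dissertation itself), the problem class described above is commonly written as

    \min_{w} \sum_{k=1}^{N} J_k(w), \qquad J_k(w) = \mathbb{E}\, Q_k(w; \boldsymbol{x}_k) + R_k(w),

where Q_k is a differentiable risk at agent k and R_k is a non-smooth regularizer, such as the \ell_1-norm to promote sparsity or the indicator function of a constraint set. Because the data statistics are unknown, adaptive strategies of this kind typically replace the true gradient by an instantaneous approximation evaluated at the streaming sample x_{k,i} and handle the regularizer through its proximal operator,

    w_{k,i} = \operatorname{prox}_{\mu R_k}\big( w_{k,i-1} - \mu\, \nabla_w Q_k(w_{k,i-1}; x_{k,i}) \big),

with a combination (averaging) step over each agent's neighborhood supplying the localized interactions in the distributed setting. This is a generic stochastic proximal-gradient template consistent with the keywords below, not a statement of the dissertation's specific algorithms.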

Keywords
distributed optimization, distributed learning, non-convex optimization, stochastic gradient, proximal gradient, saddle point, gradient noise, graph learning, adaptive, online learning, smoothing

Information

Author
Vlaski, Stefan
Institution
University of California, Los Angeles
Supervisor
Sayed, Ali H.
Publication Year
2019
Upload Date
Oct. 24, 2022
