Distributed Stochastic Optimization in Non-Differentiable and Non-Convex Environments (2019)
Abstract
The first part of this dissertation considers distributed learning problems over networked agents. The general objective of distributed adaptation and learning is the solution of global, stochastic optimization problems through localized interactions and without information about the statistical properties of the data. Regularization is a useful technique to encourage or enforce structural properties on the resulting solution, such as sparsity or constraints. A substantial number of regularizers are inherently non-smooth, while many cost functions are differentiable. We propose distributed, adaptive strategies that minimize aggregate sums of such objectives by exploiting the structure of the individual objectives as sums of differentiable costs and non-differentiable regularizers. The resulting algorithms are adaptive ...
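As a concrete illustration of the kind of strategy the abstract describes, below is a minimal sketch of a proximal diffusion iteration for sparse recovery: each agent takes a stochastic gradient step on its differentiable cost, applies the proximal operator of the non-differentiable regularizer, and then combines with its neighbors. It assumes an ℓ1 regularizer (whose proximal operator is soft-thresholding) and streaming least-mean-squares costs; the function names, the uniform combination matrix, and all parameter values are illustrative assumptions, not taken from the dissertation.

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of tau * ||w||_1 (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

# Network of K agents; A[l, k] is the weight agent k assigns to neighbor l.
# A is left-stochastic (columns sum to one), encoding localized interactions.
K, d = 4, 10
A = np.full((K, K), 1.0 / K)   # fully connected, uniform weights (toy choice)

rng = np.random.default_rng(0)
w_star = np.zeros(d)
w_star[:2] = [1.0, -0.5]       # sparse target model

def sample_grad(w):
    """Instantaneous LMS gradient from one streaming sample;
    no knowledge of the data statistics is assumed."""
    x = rng.standard_normal(d)
    y = x @ w_star + 0.01 * rng.standard_normal()
    return (x @ w - y) * x     # gradient of 0.5 * (x @ w - y)**2

mu, lam = 0.02, 0.01           # constant step-size keeps the strategy adaptive
W = np.zeros((K, d))           # one iterate per agent
for _ in range(2000):
    # Adapt: stochastic gradient step on the smooth cost, then the proximal
    # step accounts for the non-differentiable l1 regularizer.
    psi = np.array([soft_threshold(W[k] - mu * sample_grad(W[k]), mu * lam)
                    for k in range(K)])
    # Combine: each agent averages the intermediate iterates of its neighbors.
    W = A.T @ psi
```

With a constant step-size, each agent tracks the sparse target from streaming data alone, which is what makes the strategy adaptive; the combine step is what propagates information across the network in place of any central coordinator.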
distributed optimization – distributed learning – non-convex optimization – stochastic gradient – proximal gradient – saddle point – gradient noise – graph learning – adaptive – online learning – smoothing.
Information
- Author: Vlaski, Stefan
- Institution: University of California, Los Angeles
- Supervisor:
- Publication Year: 2019
- Upload Date: Oct. 24, 2022