Representation Learning in Distributed Networks (2022)
Abstract
The effectiveness of machine learning (ML) in today's applications largely depends on the quality of the data representation used within the ML algorithms. While the high dimensionality of modern data often demands lower-dimensional representations for efficient use of available computational resources, uncorrelated features are also known to enhance the performance of ML algorithms. An efficient representation learning solution should therefore target both dimension reduction and uncorrelated feature extraction. Although Principal Component Analysis (PCA) and linear autoencoders are fundamental data preprocessing tools used primarily for dimension reduction, when engineered properly they can also be used to extract uncorrelated features. At the ...
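The abstract's two goals, dimension reduction and uncorrelated feature extraction, are both delivered by classical PCA. The sketch below is a minimal single-machine illustration with synthetic data and an assumed target dimension of 5 (none of it taken from the thesis): it projects data onto the top principal directions and checks that the covariance of the new features is diagonal, i.e. the features are uncorrelated.

```python
import numpy as np

# Minimal PCA sketch: reduce dimension and verify that the extracted
# features are uncorrelated. Data, dimensions, and target rank are
# illustrative assumptions, not taken from the thesis.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 20)) @ rng.standard_normal((20, 20))  # 500 samples, 20 dims

Xc = X - X.mean(axis=0)                  # center the data
C = (Xc.T @ Xc) / (len(Xc) - 1)          # sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(C)     # eigendecomposition (ascending eigenvalues)
W = eigvecs[:, ::-1][:, :5]              # top-5 principal directions

Z = Xc @ W                               # 5-dimensional representation
Cz = (Z.T @ Z) / (len(Z) - 1)            # covariance of the new features
off_diag = Cz - np.diag(np.diag(Cz))
print(np.max(np.abs(off_diag)))          # ~0: the features are uncorrelated
```

Because the principal directions are orthonormal eigenvectors of the covariance, the projected covariance is diagonal by construction, which is the "uncorrelated features" property the abstract refers to.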
principal component analysis – dimension reduction – representation learning – distributed learning – convergence – Oja's rule – Krasulina's method
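Among the indexed keywords, Oja's rule is the classical streaming update for the top principal direction, and the convergence questions named above concern variants of it. Below is a minimal single-node sketch under assumed synthetic data and a simple 1/t step size; the distributed versions studied in the thesis are not reproduced here.

```python
import numpy as np

# Oja's rule: a streaming estimate of the top principal direction.
# Single-node version only; the data and the 1/t step schedule are
# illustrative assumptions.
rng = np.random.default_rng(1)
A = rng.standard_normal((10, 10))
cov = A @ A.T                            # fixed covariance to sample from
L = np.linalg.cholesky(cov)

w = rng.standard_normal(10)
w /= np.linalg.norm(w)
for t in range(1, 5001):
    x = L @ rng.standard_normal(10)      # one streaming sample with covariance `cov`
    w += (1.0 / t) * x * (x @ w)         # Oja update: w += eta_t * x x^T w
    w /= np.linalg.norm(w)               # project back onto the unit sphere

top = np.linalg.eigh(cov)[1][:, -1]      # true top eigenvector for comparison
print(abs(w @ top))                      # approaches 1 as the estimate aligns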
Information
- Author: Gang, Arpita
- Institution: Rutgers University-New Brunswick
- Supervisor:
- Publication Year: 2022
- Upload Date: Feb. 6, 2023