Contrastive Reasoning in Neural Networks (2021)
Abstract
The objective of this dissertation is to rethink the inductive nature of reasoning in neural networks by providing contextual explanations for a network's decisions and addressing the network's robustness. Neural networks represent data as projections onto trained weights in a high-dimensional manifold. The trained weights act as a knowledge base encoding causal class dependencies. Inference built on features that identify dependencies within this manifold is termed inductive feed-forward inference. This is a classical cause-to-effect inference model, widely used for its simple mathematical formulation and ease of operation. Nevertheless, feed-forward models do not generalize well to untrained situations. To alleviate this generalization challenge, we use an effect-to-cause inference model ...
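The distinction between the two inference directions can be sketched on a toy model. In this hypothetical NumPy example (all names, the single linear layer, and the saliency proxy are illustrative assumptions, not the dissertation's method), feed-forward inference projects an input onto trained weights, while a contrastive, effect-to-cause question — "why class P rather than class Q?" — is approximated by the gradient of the logit difference with respect to the input:

```python
import numpy as np

# Hypothetical toy setup: one linear layer stands in for a trained
# network's knowledge base of weights W.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))   # 3 classes, 4 input features (trained weights)
x = rng.normal(size=4)        # one input sample

# Feed-forward (cause-to-effect) inference: project x onto the weights.
logits = W @ x
predicted = int(np.argmax(logits))

def contrastive_saliency(W, p, q):
    """Gradient of (logit_p - logit_q) w.r.t. the input.

    For a linear model this is simply W[p] - W[q]: the input directions
    that push the decision toward class p and away from contrast class q.
    """
    return W[p] - W[q]

# Effect-to-cause question: "Why predicted, and not the contrast class?"
contrast = int(np.argmin(logits))
saliency = contrastive_saliency(W, predicted, contrast)
```

For a deep network the same question is typically answered by backpropagating a contrastive loss between the predicted and contrast classes instead of this closed-form difference; the linear case is only meant to make the direction of inference concrete.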
Keywords: explainable machine learning – abductive reasoning – contrastive explanations – robust image recognition – image quality assessment
Information
- Author
- Prabhushankar, Mohit
- Institution
- Georgia Institute of Technology
- Supervisor
- Publication Year
- 2021
- Upload Date
- March 12, 2025