Date of Award

2023-12-01

Degree Name

Master of Science

Department

Computer Science

Advisor(s)

Martine Ceberio

Abstract

There is a growing number of applications using neural networks for making decisions. However, there is a general lack of understanding of how neural networks work. Neural networks have even been described as black boxes, which has led to a lack of trust in artificially intelligent programs. To remedy this, explainable artificial intelligence has risen as a means to validate the decision-making processes and the results of computer programs that use artificial intelligence. The work in this master's thesis is our contribution to explainable artificial intelligence, focusing on neural networks with the goal of helping users make more sense of the algorithms subsumed by the network. Our research deals with the visualization of node activations and weights within a neural network to see how data travels through the network to make decisions.

Additionally, when using neural networks, it is not clear which structure should be used or how the structure of the network influences its performance. This can result in a network with unnecessary nodes or connections that contribute to its space and computational complexity. In this work, we look at the problem of identifying edges and nodes in the network that we can remove without compromising its overall performance. Some of the pruning techniques that we have explored are pruning never-activated nodes, mostly unactivated nodes, always-activated nodes, and k nodes per layer. Pruning never-activated and mostly unactivated nodes yields accuracy similar to that of the original network, with an increase in accuracy after retraining. Pruning always-activated nodes and k nodes per layer results in comparable accuracy after some retraining.
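
To make the activation-based pruning idea concrete, the following is a minimal sketch, not the thesis's actual implementation: it assumes a PyTorch-style feed-forward network with ReLU layers, and the helper names (`count_activations`, `prune_mask`) and the `low` threshold are illustrative assumptions.

```python
# Hypothetical sketch of activation-rate pruning; names and thresholds
# are illustrative, not the method described in the thesis.
import torch
import torch.nn as nn

def count_activations(model, loader):
    """Measure how often each hidden ReLU unit fires (output > 0)."""
    counts, total, hooks = {}, 0, []

    def make_hook(name):
        def hook(module, inp, out):
            # Accumulate per-unit firing counts over the batch dimension.
            counts[name] = counts.get(name, 0) + (out > 0).sum(dim=0)
        return hook

    for name, module in model.named_modules():
        if isinstance(module, nn.ReLU):
            hooks.append(module.register_forward_hook(make_hook(name)))

    with torch.no_grad():
        for x, _ in loader:
            model(x)
            total += x.size(0)

    for h in hooks:
        h.remove()
    # Convert raw counts to activation rates in [0, 1].
    return {name: c.float() / total for name, c in counts.items()}

def prune_mask(rates, low=0.05):
    """Keep units whose activation rate exceeds `low`; units at or below
    it correspond to the 'never/mostly unactivated' nodes to prune."""
    return {name: (r > low) for name, r in rates.items()}
```

Under these assumptions, the returned boolean masks would be applied to zero out or remove the corresponding rows of each layer's weight matrix, after which the network is retrained to recover accuracy, mirroring the prune-then-retrain workflow the abstract describes.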

Language

en

Provenance

Received from ProQuest

File Size

73 p.

File Format

application/pdf

Rights Holder

Juan Puebla
