Publication Date

12-1-2023

Comments

Technical Report: UTEP-CS-23-67

Abstract

While modern deep-learning neural networks are very successful, they sometimes make mistakes, and since their results are "black boxes" -- no explanation is provided -- it is difficult to determine which recommendations are erroneous. It is therefore desirable to make the resulting computations explainable, i.e., to describe their results using commonsense rules. In this paper, we use "fuzzy" techniques -- techniques developed by Lotfi Zadeh for dealing with commonsense rules formulated in terms of imprecise ("fuzzy") words from natural language -- to show that such a rule-based representation is always possible. Our result does not yet provide the desired explainability, since it requires two rules for each neuron, and thus millions or billions of rules for a network with millions or billions of neurons. However, we believe that this is a useful first step towards the desired explainability.
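The abstract does not spell out the construction, but the "two rules per neuron" claim can be illustrated on the common case of a ReLU neuron: the neuron's output y = max(0, w·x + b) can be restated as one rule for the inactive case and one for the active case. The sketch below is only an assumption about what such a rule pair might look like; the paper's actual fuzzy-logic construction may differ, and all names (relu_neuron, rule_based_neuron, w, b) are chosen purely for illustration.

    # Minimal sketch (not the paper's construction): a ReLU neuron restated as two rules.
    import numpy as np

    def relu_neuron(x, w, b):
        """Standard neuron: weighted sum followed by ReLU activation."""
        return max(0.0, float(np.dot(w, x) + b))

    def rule_based_neuron(x, w, b):
        """The same computation expressed as two rules on the signal s = w.x + b."""
        s = float(np.dot(w, x) + b)
        # Rule 1: if the signal is non-positive, then the neuron's output is 0.
        if s <= 0.0:
            return 0.0
        # Rule 2: if the signal is positive, then the neuron's output equals the signal.
        return s

    # The two descriptions agree on random inputs.
    rng = np.random.default_rng(0)
    for _ in range(1000):
        x, w, b = rng.normal(size=3), rng.normal(size=3), rng.normal()
        assert relu_neuron(x, w, b) == rule_based_neuron(x, w, b)

With two such rules per neuron, a network with millions or billions of neurons yields millions or billions of rules, which is why the authors describe the result as a first step rather than full explainability.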
