Publication Date

6-1-2023

Comments

Technical Report: UTEP-CS-23-22a

To appear in Proceedings of the 20th World Congress of the International Fuzzy Systems Association IFSA'2023, Daegu, South Korea, August 20-24, 2023.

Abstract

One of the main limitations of many current AI-based decision-making systems is that they do not provide any understandable explanation of how they came up with the produced decision. Taking into account that these systems are not perfect and that their decisions are sometimes far from good, the absence of an explanation makes it difficult to separate good decisions from suspicious ones. Because of this, many researchers are working on making AI explainable. In some application areas -- e.g., in chess -- practitioners get the impression that there is a limit to understandability: some decisions remain inhuman, i.e., not explainable. In this paper, we use fuzzy techniques to analyze this situation. We show that for relatively simple systems, explainable models are indeed optimal approximate descriptions, while for more complex systems, there is a limit on the adequacy of explainable models.

