Publication Date

3-1-2021

Comments

Technical Report: UTEP-CS-21-20a

Published in Julia Rayz, Victor Raskin, Scott Dick, and Vladik Kreinovich (eds.), Explainable AI and Other Applications of Fuzzy Techniques, Proceedings of the Annual Conference of the North American Fuzzy Information Processing Society NAFIPS'2021, West Lafayette, Indiana, June 7-9, 2021, Springer, Cham, Switzerland, 2022, pp. 74-78.

Abstract

One of the big challenges of many state-of-the-art AI techniques such as deep learning is that their results do not come with any explanations -- and, since some of the resulting conclusions and recommendations are far from optimal, it is difficult to distinguish good advice from bad. It is therefore desirable to develop explainable AI. In this paper, we argue that fuzzy techniques are a proper way to achieve this explainability, and we also analyze which fuzzy techniques are most appropriate for this purpose. Interestingly, it turns out that the answer depends on the problem being solved: e.g., different "and"- and "or"-operations are preferable when we are controlling a single object than when we are controlling a group of objects.
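To make the abstract's mention of "and"- and "or"-operations concrete, here is a minimal illustrative sketch (not taken from the paper itself) of two standard fuzzy "and"-operations (t-norms) and their dual "or"-operations (t-conorms), applied to membership degrees in [0, 1]; the specific example values are hypothetical:

```python
def and_min(a, b):
    """Zadeh's "and": minimum t-norm."""
    return min(a, b)

def and_prod(a, b):
    """Algebraic product t-norm."""
    return a * b

def or_max(a, b):
    """Zadeh's "or": maximum t-conorm (dual of min)."""
    return max(a, b)

def or_prob(a, b):
    """Probabilistic sum t-conorm (dual of product)."""
    return a + b - a * b

# Hypothetical degrees to which a speed is "high" and a distance is "short"
high, short = 0.8, 0.6
print(and_min(high, short))   # min-based "and": 0.6
print(or_max(high, short))    # max-based "or": 0.8
print(and_prod(high, short))  # product-based "and"
print(or_prob(high, short))   # probabilistic-sum "or"
```

Different choices of these operations yield different aggregate degrees, which is why, as the paper argues, the appropriate choice can depend on the control problem being solved.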

tr21-20.pdf (102 kB)
Original file
