Publication Date

6-1-2023

Comments

Technical Report: UTEP-CS-23-24

Abstract

A reasonable way to make AI results explainable is to approximate the corresponding deep-learning-generated function by a simple expression formed by fuzzy operations. Experiments on real data show that, out of all easy-to-compute fuzzy operations, the best approximation is attained if we use the operation a + b − 0.5 (limited to the interval [0, 1]). In this paper, we provide a possible theoretical explanation for this empirical result.
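
To make the operation concrete, here is a minimal sketch, not taken from the report itself: the function name fuzzy_op and the sample inputs are hypothetical, and the sketch only assumes that "limited to the interval [0, 1]" means clipping the sum a + b − 0.5 back into [0, 1].

    def fuzzy_op(a: float, b: float) -> float:
        """Combine two membership degrees a, b from [0, 1] via
        a + b - 0.5, clipped back to the interval [0, 1]."""
        return max(0.0, min(1.0, a + b - 0.5))

    # Hypothetical usage:
    print(fuzzy_op(0.7, 0.4))  # 0.6
    print(fuzzy_op(0.1, 0.2))  # 0.0 (clipped from -0.2)
    print(fuzzy_op(0.9, 0.8))  # 1.0 (clipped from 1.2)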
