One of the main limitations of many current AI-based decision-making systems is that they do not provide understandable explanations of how they arrived at their decisions. Since these systems are not perfect and their decisions are sometimes far from good, the absence of an explanation makes it difficult to separate good decisions from suspicious ones. Because of this, many researchers are working on making AI explainable. In some application areas -- e.g., in chess -- practitioners have the impression that there is a limit to understandability, that some decisions remain inhuman, i.e., not explainable. In this paper, we use fuzzy techniques to analyze this situation. We show that for relatively simple systems, explainable models are indeed optimal approximate descriptions, while for more complex systems, there is a limit on the adequacy of explainable models.