Publication Date

10-1-2024

Comments

Technical Report: UTEP-CS-24-51

Abstract

At present, the most successful AI technique is deep learning -- the use of neural networks that consist of multiple layers. Interestingly, it is well known that neural networks with two data-processing layers are already sufficient, in the sense that they can approximate any function with any given accuracy. Because of this, until fairly recently, researchers and practitioners mostly used such two-layer networks. Recently, however, it turned out, somewhat unexpectedly, that using three or more data-processing layers -- i.e., using what is called deep learning -- makes neural networks much more efficient. In this paper, using numerous examples from AI and beyond, we show that this is a general phenomenon: two is enough, but three or more is better. For many of these examples, there is a case-specific explanation. However, the fact that the phenomenon is so widespread suggests that a general explanation exists -- and we propose a possible one.
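To make the abstract's contrast concrete, below is a minimal NumPy sketch (not taken from the report) of the two architectures it refers to: a "shallow" network with two data-processing layers (one nonlinear hidden layer followed by a linear output layer) and a "deep" network with three such layers. The layer widths, the ReLU activation, and the random weights are illustrative assumptions only.

    # Minimal sketch: two vs. three data-processing layers (illustrative only).
    import numpy as np

    def relu(x):
        return np.maximum(0.0, x)

    def shallow_forward(x, W1, b1, W2, b2):
        """Two data-processing layers: one hidden layer, then a linear output layer."""
        return relu(x @ W1 + b1) @ W2 + b2

    def deep_forward(x, W1, b1, W2, b2, W3, b3):
        """Three data-processing layers: two hidden layers, then a linear output layer."""
        h = relu(x @ W1 + b1)
        h = relu(h @ W2 + b2)
        return h @ W3 + b3

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        x = rng.normal(size=(4, 3))                     # 4 sample inputs of dimension 3
        # Shallow: 3 -> 16 -> 1 (widths are arbitrary assumptions)
        W1, b1 = rng.normal(size=(3, 16)), np.zeros(16)
        W2, b2 = rng.normal(size=(16, 1)), np.zeros(1)
        print(shallow_forward(x, W1, b1, W2, b2).shape)      # (4, 1)
        # Deep: 3 -> 8 -> 8 -> 1 (one more layer, narrower layers)
        V1, c1 = rng.normal(size=(3, 8)), np.zeros(8)
        V2, c2 = rng.normal(size=(8, 8)), np.zeros(8)
        V3, c3 = rng.normal(size=(8, 1)), np.zeros(1)
        print(deep_forward(x, V1, c1, V2, c2, V3, c3).shape)  # (4, 1)

Both sketches map the same inputs to outputs of the same shape; the paper's point is that, in practice, the deeper variant tends to be much more efficient even though the shallow one is, in principle, sufficient.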
