Publication Date

4-1-2022

Comments

Technical Report: UTEP-CS-22-51

To appear in International Journal of Parallel, Emergent and Distributed Systems

Abstract

In many practical situations, deep neural networks work better than the traditional "shallow" ones; in some cases, however, shallow neural networks lead to better results. At present, deciding which type of network will work better is mostly done by trial and error. It is therefore desirable to have a criterion for when deep learning is better and when shallow learning is better. In this paper, we argue that the answer depends on whether the corresponding situation has natural symmetries: if it does, we expect deep learning to work better; otherwise, we expect shallow learning to be more effective. Our general qualitative arguments are strengthened by the fact that, in the simplest case, the connection between symmetries and the effectiveness of deep learning can be proven theoretically.
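
The following is a minimal, hypothetical sketch (not taken from the report) of the shallow-versus-deep comparison the abstract refers to: it trains a one-hidden-layer network and a multi-layer network of roughly equal parameter count on a permutation-symmetric target function. The target function, widths, and training settings are arbitrary illustrative choices, assuming PyTorch is available.

```python
# Illustrative sketch only: compare a shallow and a deep MLP of roughly
# matched parameter count (~640 weights each) on a target function that is
# invariant under permutations of its inputs. All choices below are
# assumptions for illustration, not the experimental setup of the paper.
import torch
import torch.nn as nn

torch.manual_seed(0)

def symmetric_target(x):
    # Invariant under any permutation of the three input coordinates.
    return torch.sin(x.sum(dim=1, keepdim=True))

# Random training / test data in [-1, 1]^3.
x_train = 2 * torch.rand(2000, 3) - 1
y_train = symmetric_target(x_train)
x_test = 2 * torch.rand(500, 3) - 1
y_test = symmetric_target(x_test)

shallow = nn.Sequential(            # one wide hidden layer
    nn.Linear(3, 128), nn.Tanh(), nn.Linear(128, 1))
deep = nn.Sequential(               # several narrower hidden layers
    nn.Linear(3, 16), nn.Tanh(),
    nn.Linear(16, 16), nn.Tanh(),
    nn.Linear(16, 16), nn.Tanh(),
    nn.Linear(16, 1))

def train_and_evaluate(model, epochs=2000):
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x_train), y_train)
        loss.backward()
        opt.step()
    with torch.no_grad():
        return loss_fn(model(x_test), y_test).item()

print("shallow test MSE:", train_and_evaluate(shallow))
print("deep    test MSE:", train_and_evaluate(deep))
```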
