Why Rectified Power (RePU) Activation Functions Are Efficient in Deep Learning: A Theoretical Explanation

Laxman Bokati, The University of Texas at El Paso
Vladik Kreinovich, The University of Texas at El Paso
Joseph Baca, The University of Texas at El Paso
Natasha Rovelli, The University of Texas at El Paso

Technical Report: UTEP-CS-22-90

Abstract

At present, the most efficient machine learning technique is deep learning, with neurons using the Rectified Linear (ReLU) activation function s(z) = max(0,z). However, in many cases, the use of Rectified Power (RePU) activation functions (s(z))^p -- for some p -- leads to better results. In this paper, we explain these results by proving that RePU functions (or their "leaky" versions) are optimal with respect to all reasonable optimality criteria.
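
To illustrate the activation functions discussed in this report, the following is a minimal NumPy sketch (not part of the report itself) of ReLU, RePU, and their "leaky" variants; the power p, the leak slope alpha, and the particular leaky-RePU convention used here are illustrative assumptions.

import numpy as np

def relu(z):
    # Rectified Linear Unit: s(z) = max(0, z)
    return np.maximum(0.0, z)

def repu(z, p=2):
    # Rectified Power Unit: (max(0, z))^p for some power p
    return np.maximum(0.0, z) ** p

def leaky_relu(z, alpha=0.01):
    # Leaky ReLU: z for z >= 0, alpha * z for z < 0
    return np.where(z >= 0, z, alpha * z)

def leaky_repu(z, p=2, alpha=0.01):
    # One possible "leaky" RePU convention (an assumption here):
    # z^p for z >= 0, alpha * z for z < 0
    return np.where(z >= 0, np.maximum(z, 0.0) ** p, alpha * z)

if __name__ == "__main__":
    z = np.array([-2.0, -0.5, 0.0, 1.0, 3.0])
    print("ReLU:       ", relu(z))
    print("RePU (p=2): ", repu(z, p=2))
    print("Leaky ReLU: ", leaky_relu(z))
    print("Leaky RePU: ", leaky_repu(z))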