Why Rectified Power (RePU) Activation Functions Are Efficient in Deep Learning: A Theoretical Explanation
Technical Report: UTEP-CS-22-90
Abstract
At present, one of the most efficient machine learning techniques is deep learning, in which neurons use the Rectified Linear (ReLU) activation function s(z) = max(0, z). In many cases, the use of Rectified Power (RePU) activation functions (s(z))^p -- for some p -- leads to better results. In this paper, we explain these results by proving that RePU functions (or their "leaky" versions) are optimal with respect to all reasonable optimality criteria.
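For concreteness, the following is a minimal sketch of the activation functions named above, assuming NumPy; the exponent p, the leak slope alpha, and the exact form of the "leaky" variant are illustrative assumptions, not definitions taken from the paper.

    import numpy as np

    def relu(z):
        # Rectified Linear: s(z) = max(0, z)
        return np.maximum(0.0, z)

    def repu(z, p=2):
        # Rectified Power: (s(z))^p = (max(0, z))^p
        return np.maximum(0.0, z) ** p

    def leaky_repu(z, p=2, alpha=0.01):
        # Hypothetical "leaky" variant: instead of zeroing out negative
        # inputs, keep a small sign-preserving power of them.
        return np.where(z > 0, z ** p,
                        alpha * np.sign(z) * np.abs(z) ** p)

With p = 1, repu reduces to the standard ReLU, so RePU can be viewed as a one-parameter family that contains ReLU as a special case.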
This paper has been withdrawn.