Publication Date

7-1-2022

Comments

Technical Report: UTEP-CS-22-90

Abstract

At present, the most efficient machine learning technique is deep learning, with neurons using the Rectified Linear (ReLU) activation function s(z) = max(0, z). In many cases, the use of Rectified Power (RePU) activation functions (s(z))^p -- for some p -- leads to better results. In this paper, we explain these results by proving that RePU functions (or their "leaky" versions) are optimal with respect to all reasonable optimality criteria.
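
For concreteness, a minimal sketch (not part of the report) of the two activation functions as defined above; the "leaky" variant shown follows the standard leaky-ReLU construction with an assumed small slope alpha:

    import numpy as np

    def relu(z):
        # Rectified Linear Unit: s(z) = max(0, z)
        return np.maximum(0.0, z)

    def repu(z, p=2):
        # Rectified Power Unit: (s(z))^p = (max(0, z))^p
        return np.maximum(0.0, z) ** p

    def leaky_relu(z, alpha=0.01):
        # Standard "leaky" variant: z for z >= 0, alpha * z otherwise
        # (alpha = 0.01 is an assumed, conventional choice)
        return np.where(z >= 0, z, alpha * z)

    # Compare the activations on a few sample inputs
    z = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
    print(relu(z))          # [0.   0.   0.   0.5  2.  ]
    print(repu(z, p=2))     # [0.   0.   0.   0.25 4.  ]
    print(leaky_relu(z))    # [-0.02  -0.005  0.    0.5   2.   ]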
