Why A Model Produced by Training a Neural Network Is Often More Computationally Efficient than a Nonlinear Regression Model: A Theoretical Explanation
Many real-life dependencies can be described reasonably accurately by linear functions. If we want a more accurate description, we need to take nonlinear terms into account. This can be done either by explicitly adding quadratic terms to the regression equation or by training a neural network with a nonlinear activation function. At first glance, the regression approach leads to simpler expressions, but in practice, a trained neural network often turns out to be a more computationally efficient way of predicting the corresponding dependence. In this paper, we provide a theoretical explanation for this empirical fact.
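As a rough illustration of the contrast described above (not the paper's own argument), one can compare the number of multiplications needed to evaluate each model. Assuming, hypothetically, a full quadratic model in n inputs and a one-hidden-layer network with k neurons, the quadratic model costs on the order of n^2/2 multiplications per prediction, while the network costs on the order of k*n; for k much smaller than n, the network is cheaper:

```python
def quadratic_mults(n: int) -> int:
    """Multiplications to evaluate a full quadratic model in n inputs:
    n*(n+1)/2 for the products x_i * x_j (i <= j) plus n for the linear terms."""
    return n * (n + 1) // 2 + n

def network_mults(n: int, k: int) -> int:
    """Multiplications to evaluate a one-hidden-layer network with k neurons:
    k*n for the hidden-layer weights plus k for the linear output layer."""
    return k * n + k

# Hypothetical sizes chosen only for illustration.
n, k = 50, 10
print(quadratic_mults(n))   # 1325
print(network_mults(n, k))  # 510
```

The counts ignore additions and activation-function evaluations, so they are only a first-order sketch of the cost comparison.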