Publication Date

9-1-2024

Comments

Technical Report: UTEP-CS-24-50

Abstract

In many practically useful numerical computations, training a neural network and then using it has turned out to be a much faster alternative to running the original computations. When we applied a similar idea to take interval uncertainty into account, we encountered two unexpected results: (1) while for numerical computations it is usually better to represent an interval by its midpoint and half-width, for neural networks it is more efficient to represent an interval by its endpoints, and (2) while it is usually better to train a single neural network on the whole data processing algorithm, in our problems it turned out to be more efficient to train several subnetworks on subtasks and then combine their results. In this paper, we provide a theoretical explanation for these unexpected results.
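
To make the two interval representations mentioned in the abstract concrete, here is a minimal Python sketch (the function names and the toy example are illustrative assumptions, not code from the report): it converts an interval given by its endpoints [a, b] into the equivalent midpoint/half-width pair (m, Delta) and back.

# Illustrative sketch only: the two equivalent ways to describe an interval.
# Names and the toy example are hypothetical, not taken from the report.

def endpoints_to_mid_halfwidth(a, b):
    """Convert an interval [a, b] to its midpoint m and half-width Delta."""
    m = (a + b) / 2.0
    delta = (b - a) / 2.0
    return m, delta

def mid_halfwidth_to_endpoints(m, delta):
    """Convert a midpoint/half-width pair (m, Delta) back to endpoints [a, b]."""
    return m - delta, m + delta

if __name__ == "__main__":
    # The same interval [1.0, 3.0] in both representations:
    a, b = 1.0, 3.0
    m, delta = endpoints_to_mid_halfwidth(a, b)   # (2.0, 1.0)
    print(f"endpoints ({a}, {b})  <->  midpoint/half-width ({m}, {delta})")
    assert mid_halfwidth_to_endpoints(m, delta) == (a, b)

Either pair carries the same information; the report's first result is that, as inputs to a neural network, the endpoint form works better, even though the midpoint/half-width form is the usual choice in interval computations.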
