Publication Date

7-1-2023

Comments

Technical Report: UTEP-CS-23-42

Abstract

Often, once we have trained a neural network to estimate the value of a quantity y based on the available values of inputs x1, ..., xn, we learn to measure the values of an additional quantity that has some influence on y. In such situations, it is desirable to re-train the neural network so that it can take this extra value into account. A straightforward idea is to add a new input to the first layer and to update all the weights based on patterns that include the value of the new input. The problem with this straightforward idea is that, while it yields only a minor improvement in accuracy, such re-training takes a lot of time -- almost as much as the original training. In this paper, we show, both theoretically and experimentally, that in such situations, we can speed up re-training -- practically without decreasing the resulting accuracy -- if we only update some weights.
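The idea of updating only some weights can be sketched as follows. This is a minimal illustration, not the paper's actual experimental setup: the one-hidden-layer ReLU architecture, the synthetic target (which simply adds a linear dependence on a new input), and the learning rate are all assumptions made for the example. All previously trained weights are frozen, and only the first-layer weights attached to the new input are updated.

```python
import numpy as np

rng = np.random.default_rng(0)

# A "pre-trained" one-hidden-layer network on n = 3 inputs:
# y_hat = W2 . relu(W1 x + b1) + b2.
n, h = 3, 8
W1 = rng.normal(size=(h, n))
b1 = rng.normal(size=h)
W2 = rng.normal(size=h)
b2 = 0.0

def forward(x, w_new, x_new):
    # w_new is the new column of first-layer weights for the extra input x_new.
    pre = W1 @ x + b1 + w_new * x_new
    return W2 @ np.maximum(pre, 0.0) + b2

# Hypothetical ground truth: the old dependence plus a term in a new input.
def target(x, x_new):
    return forward(x, np.zeros(h), 0.0) + 2.0 * x_new

# Partial re-training: gradient steps on w_new ONLY; W1, b1, W2, b2 stay frozen.
w_new = np.zeros(h)
lr = 0.01
for _ in range(500):
    x, x_new = rng.normal(size=n), rng.normal()
    pre = W1 @ x + b1 + w_new * x_new
    err = W2 @ np.maximum(pre, 0.0) + b2 - target(x, x_new)
    # Gradient of 0.5 * err^2 with respect to w_new alone.
    w_new -= lr * err * W2 * (pre > 0) * x_new

# Compare against leaving the new weights at zero, on fresh samples.
xs, xns = rng.normal(size=(200, n)), rng.normal(size=200)
mse_trained = np.mean([(forward(x, w_new, xn) - target(x, xn)) ** 2
                       for x, xn in zip(xs, xns)])
mse_zero = np.mean([(forward(x, np.zeros(h), xn) - target(x, xn)) ** 2
                    for x, xn in zip(xs, xns)])
```

Because only h weights are updated (rather than all h*(n+1) + h + h + 1 of them), each gradient step is cheap, and the frozen part of the network never needs to be re-fit to the old patterns.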
