Technical Report: UTEP-CS-24-31


Any data processing starts with measurement results, and measurement results are never absolutely accurate. Because of this measurement uncertainty, the results of processing measurement results are, in general, somewhat different from what we would have obtained if we knew the exact values of the measured quantities. To make a decision based on the result of data processing, we need to know how accurate this result is, i.e., we need to propagate the measurement uncertainty through the data processing algorithm. There are many techniques for uncertainty propagation. Usually, they involve applying the same data processing algorithm several times to appropriately modified data. As a result, the computation time for uncertainty propagation is several times larger than for data processing itself. This is a critical issue for data processing algorithms that require a large number of computational steps -- such as modern deep learning-based AI techniques, for which a several-times increase in computation time is not feasible. At first glance, the situation may seem hopeless. The good news is that there is another problem with modern AI algorithms: usually, once such a model is trained, its weights are frozen and it stops learning -- as a result, the quality of its answers degrades over time. This is good news because, as we show, solving this second problem -- by allowing at least one learning step for each new use of the model -- also leads to an efficient uncertainty propagation algorithm.
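To illustrate why standard uncertainty propagation multiplies the computation time, here is a minimal sketch of a common approach: Monte Carlo propagation, which re-runs the data processing algorithm on inputs perturbed according to the measurement uncertainties. The function `f` below is a hypothetical stand-in for any black-box data processing algorithm (e.g., a trained neural network); the function name, the Gaussian noise model, and the parameter choices are illustrative assumptions, not taken from the report.

```python
import random
import statistics

def f(x1, x2):
    # Hypothetical data processing algorithm (a stand-in for any
    # black box, such as a trained deep learning model).
    return x1 * x1 + 2 * x2

def monte_carlo_propagate(f, measured, sigmas, n_runs=1000, seed=0):
    """Estimate the uncertainty of f's result by applying f many times
    to inputs perturbed according to the measurement uncertainties.

    measured: list of measured input values
    sigmas:   list of standard deviations of the measurement errors
    Returns (mean, standard deviation) of the propagated results.
    """
    rng = random.Random(seed)
    results = []
    for _ in range(n_runs):
        # Each run requires one full invocation of f -- hence the
        # several-times (here, n_runs-times) increase in computation time.
        perturbed = [x + rng.gauss(0.0, s) for x, s in zip(measured, sigmas)]
        results.append(f(*perturbed))
    return statistics.mean(results), statistics.stdev(results)

mean, std = monte_carlo_propagate(f, measured=[1.0, 2.0], sigmas=[0.1, 0.1])
```

Note that the estimate requires `n_runs` full evaluations of `f`; when a single evaluation is already expensive, as with a large neural network, this overhead is exactly the problem the report addresses.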