Publication Date

2-1-2022

Comments

Technical Report: UTEP-CS-22-14

Abstract

In high performance computing, when we process large amounts of data, we usually have little information about the dependence between the measurement errors corresponding to different inputs. To gauge the uncertainty of the result of data processing, the two usual approaches are: the interval approach, in which we consider the worst-case scenario where all measurement errors are strongly correlated, and the probabilistic approach, in which we assume that all these errors are independent. The problem is that the interval approach usually leads to too pessimistic, too large uncertainty estimates, while the probabilistic approach often underestimates the resulting uncertainty. To get realistic estimates, it is therefore desirable to have techniques intermediate between interval and probabilistic ones. In this paper, we propose such techniques based on the assumption that, in each practical situation, there is an upper bound 0 ≤ b ≤ 1 on the absolute values of all correlations -- a bound that needs to be determined experimentally. For b = 0, we get probabilistic estimates; for b = 1, we get interval estimates; and for intermediate values of b, we get the desired intermediate techniques. We also provide efficient algorithms for implementing the new techniques.
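To make the idea of interpolating between the two estimates concrete, below is a minimal Python sketch for a linear data-processing result y = c_1 x_1 + ... + c_n x_n, where each measurement error has standard deviation sigma_i and all pairwise correlations are assumed to satisfy |rho_ij| ≤ b. The combination formula, the use of standard deviations in place of interval half-widths for the b = 1 case, and the function name are illustrative assumptions, not the report's actual algorithm.

```python
import math

def combined_error_bound(coeffs, sigmas, b):
    """Illustrative sketch: upper bound on the standard deviation of
    y = sum(c_i * x_i) when each error has standard deviation sigma_i
    and all pairwise correlations satisfy |rho_ij| <= b, 0 <= b <= 1.

    b = 0 reproduces the independent (probabilistic) estimate;
    b = 1 reproduces the fully correlated worst-case estimate.
    """
    # Independent case: variances add.
    var_independent = sum((c * s) ** 2 for c, s in zip(coeffs, sigmas))
    # Fully correlated worst case: standard deviations add.
    sigma_worst = sum(abs(c) * s for c, s in zip(coeffs, sigmas))
    # Under |rho_ij| <= b, Var(y) <= sum (c_i s_i)^2
    #   + b * sum_{i != j} |c_i c_j| s_i s_j
    #   = (1 - b) * var_independent + b * sigma_worst ** 2.
    var_bound = (1.0 - b) * var_independent + b * sigma_worst ** 2
    return math.sqrt(var_bound)


if __name__ == "__main__":
    coeffs = [1.0, -2.0, 0.5]
    sigmas = [0.1, 0.05, 0.2]
    for b in (0.0, 0.5, 1.0):
        print(f"b = {b}: bound = {combined_error_bound(coeffs, sigmas, b):.4f}")
```

As expected, the bound grows monotonically from the probabilistic estimate at b = 0 to the worst-case estimate at b = 1.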
