Publication Date
1-2010
Abstract
Measurement results (and, more generally, estimates) are never absolutely accurate: there is always an uncertainty, and the actual value x is, in general, different from the estimate X. Sometimes we know the probabilities of different values of the estimation error dx = X - x; sometimes we only know the interval of possible values of dx; and sometimes we have interval bounds on the cdf of dx. To compare different measuring instruments, it is desirable to know which of them brings more information, i.e., to gauge the amount of information. For probabilistic uncertainty, this amount is described by Shannon's entropy; similar measures can be developed for interval and other types of uncertainty. In this paper, we analyze the computational complexity of the problem of estimating the amount of information under different types of uncertainty.
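To make the contrast between the two measures concrete, here is a minimal sketch (not taken from the paper; the function names and the interval formula log2(width / (2 * eps)) are illustrative assumptions): Shannon's entropy for a known discrete distribution, next to a simple "number of binary questions" count for pure interval uncertainty.

```python
import math

def shannon_entropy(probs):
    """Shannon's entropy H = -sum(p_i * log2(p_i)), in bits:
    the average number of binary (yes/no) questions needed to
    pinpoint the value of a discrete random quantity."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def interval_information(lo, hi, eps):
    """Illustrative interval analogue (an assumption, not the
    paper's exact definition): if all we know is dx in [lo, hi]
    and we want to locate dx with accuracy eps, we need roughly
    log2(width / (2 * eps)) binary questions."""
    return math.log2((hi - lo) / (2 * eps))

# A uniform 8-outcome distribution carries 3 bits ...
print(shannon_entropy([1/8] * 8))            # 3.0
# ... and narrowing [0, 1] down to accuracy 1/16 also takes 3 bits.
print(interval_information(0.0, 1.0, 1/16))  # 3.0
```

In both cases the measure counts binary questions; the difference is only in what is assumed known about dx beforehand.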
Comments
Technical Report: UTEP-CS-09-37a
Published in International Journal of General Systems, 2010, Vol. 39, No. 4, pp. 349-378.