Technical Report: UTEP-CS-23-58


According to decision theory, to recommend the best of several possible actions, we need to know, for each possible action, the probabilities of different outcomes, and we also need to know the decision maker's utility function -- the function that describes his/her preferences. For some pairs of probability distributions, however, we can make such a recommendation without knowing the exact form of the utility function -- e.g., in financial applications, we only need to know that a larger amount of money is preferable to a smaller one. Such situations, when we can make decisions based only on information about the probabilities, are known as {\it stochastic dominance}. The usual analysis of such situations is based on the idealized assumption that any difference in utility, no matter how small, is important. In reality, very small changes in utility value are irrelevant. From this viewpoint, if the utility corresponding to the distribution $F_2(x)$ is always either larger than, or only slightly smaller than, the utility corresponding to $F_1(x)$, then we can still conclude that the second action is better than (or of the same quality as) the first one. In this paper, we show how to describe such approximate stochastic dominance in precise terms.
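The idea can be illustrated by a small sketch. This is not the paper's formal definition, only a hedged example: first-order stochastic dominance compares the two cumulative distribution functions on a common grid of outcomes (a smaller CDF value puts more probability on larger outcomes), and the approximate version tolerates violations up to a small threshold eps. The function name `dominates` and the grid-based representation are assumptions made for illustration.

```python
def dominates(F2, F1, eps=0.0):
    """Return True if F2 first-order dominates F1 up to tolerance eps,
    i.e. F2(x) <= F1(x) + eps at every grid point.  A smaller CDF value
    means more probability mass on larger (preferred) outcomes."""
    return all(f2 <= f1 + eps for f1, f2 in zip(F1, F2))

# CDF values on a shared grid of outcomes x_0 < x_1 < ... < x_4:
F1 = [0.2, 0.5, 0.70, 0.90, 1.0]
F2 = [0.1, 0.4, 0.72, 0.85, 1.0]  # slightly above F1 at one grid point

print(dominates(F2, F1))            # False: exact dominance fails
print(dominates(F2, F1, eps=0.05))  # True: approximate dominance holds
```

In this toy example, the single small violation (0.72 vs. 0.70) blocks exact dominance, but once differences below eps are treated as irrelevant, the second distribution is still recognized as at least as good -- which is the intuition behind approximate stochastic dominance described above.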