Publication Date
6-2020
Abstract
Since in a computer, "true" is usually represented as 1 and "false" as 0, it is natural to represent intermediate degrees of confidence by numbers between 0 and 1; this is one of the main ideas behind fuzzy logic -- a technique that has led to many useful applications. In many such applications, the degree of confidence in A & B is estimated as the minimum of the degrees of confidence corresponding to A and B, and the degree of confidence in A \/ B is estimated as the maximum; for example, 0.5 \/ 0.3 = 0.5. It is intuitively OK that, e.g., 0.5 \/ 0.3 < 0.51 and, more generally, that 0.5 \/ 0.3 < 0.5 + ε for every ε > 0. Intuitively, however, an additional argument in favor of a statement should increase our degree of confidence in it, i.e., we should have 0.5 < 0.5 \/ 0.3. To capture this intuitive idea, we need to extend the min-max logic from the interval [0,1] to a lexicographic-type order on a larger set. Such an extension has been proposed -- and successfully used in applications -- for some propositional formulas. A natural question is: can this construction be uniquely extended to all "and"-"or" formulas? In this paper, we show that, in general, such an extension is not unique.
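To make the lexicographic idea concrete, here is a minimal Python sketch of one hypothetical way to model it: each degree of confidence is a pair (a, k), read as a + k·ε for an infinitesimal ε, and pairs are compared lexicographically. The Degree class and the particular bookkeeping rules in or_ and and_ are illustrative assumptions, not the construction from the paper.

```python
from dataclasses import dataclass

@dataclass(frozen=True, order=True)
class Degree:
    """A degree of confidence a + k*eps, eps infinitesimal.
    order=True compares fields in declared order: first a, then k,
    i.e., lexicographically."""
    a: float   # standard fuzzy degree in [0, 1]
    k: int = 0 # multiple of the infinitesimal eps

def or_(x: Degree, y: Degree) -> Degree:
    """Lexicographic 'or' (hypothetical rule): the usual max,
    nudged up by eps when the weaker argument adds any support."""
    lo, hi = sorted([x, y])
    return Degree(hi.a, hi.k + 1) if lo.a > 0 else hi

def and_(x: Degree, y: Degree) -> Degree:
    """Lexicographic 'and' (hypothetical rule): the usual min,
    nudged down by eps when the extra condition constrains at all."""
    lo, hi = sorted([x, y])
    return Degree(lo.a, lo.k - 1) if hi.a < 1 else lo

# 0.5 \/ 0.3 is strictly above 0.5 ...
assert or_(Degree(0.5), Degree(0.3)) > Degree(0.5)
# ... yet still below 0.51, matching 0.5 \/ 0.3 < 0.5 + eps:
assert or_(Degree(0.5), Degree(0.3)) < Degree(0.51)
```

Note that the exact bookkeeping for k (increment by 1, carry the weaker operand's count, etc.) is a design choice in this sketch; the existence of such inequivalent choices for compound formulas is in the spirit of the paper's conclusion that the extension to all "and"-"or" formulas is, in general, not unique.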
Comments
Technical Report: UTEP-CS-20-70