Authors

Gang Xiang

Publication Date

7-2006

Comments

UTEP-CS-06-33

Abstract

Many statistical characteristics y=f(x1,...,xn) are continuous, symmetric, and either concave or convex; examples include the population variance V=(1/n)*(x1^2+...+xn^2)-E^2 (where E=(1/n)*(x1+...+xn)), Shannon's entropy S=-p1*log(p1)-...-pn*log(pn), and many other characteristics. In practice, we often only know the intervals Xi=[xi-,xi+] that contain the (unknown) actual inputs xi. Since different values xi from Xi lead, in general, to different values of f(x1,...,xn), we need to find the range Y={f(x1,...,xn): x1 in X1,...,xn in Xn}, i.e., the minimum and the maximum of f(x1,...,xn) over the box X1 x ... x Xn. It is known that for convex functions, there exists a feasible (polynomial-time) algorithm for computing the minimum, but computing the maximum is, in general, NP-hard. It is therefore desirable to find feasible algorithms that compute the maximum in practically reasonable situations. For variance and (negative) entropy, such algorithms are known for the case when the inputs satisfy the following subset property: [xi-,xi+] is not contained in (xj-,xj+) for all i and j. In this paper, we show that these algorithms can be extended to the case of general symmetric convex characteristics.
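
To make the subset-property idea concrete, below is a minimal Python sketch for the variance case. It relies on the characterization known from this line of work: under the subset property, the maximum of the (convex) variance over the box is attained at a vertex where, after sorting the intervals by midpoint, some prefix of intervals takes lower endpoints and the remaining suffix takes upper endpoints. The function names, the straightforward enumeration of the n+1 candidate vertices, and the brute-force cross-check are illustrative assumptions, not the paper's algorithm (the published algorithms compute the same bound faster using incremental updates).

from itertools import product

def variance(xs):
    """Population variance V = (1/n)*sum(x^2) - E^2, where E = (1/n)*sum(x)."""
    n = len(xs)
    e = sum(xs) / n
    return sum(x * x for x in xs) / n - e * e

def max_variance_subset_property(intervals):
    """Upper bound of variance over a box of intervals satisfying the subset
    property: sort by midpoint, then try each of the n+1 'monotone' vertices
    (first k intervals at lower endpoints, the rest at upper endpoints)."""
    ivs = sorted(intervals, key=lambda ab: (ab[0] + ab[1]) / 2)
    best = float("-inf")
    for k in range(len(ivs) + 1):
        xs = [a for a, _ in ivs[:k]] + [b for _, b in ivs[k:]]
        best = max(best, variance(xs))
    return best

def max_variance_bruteforce(intervals):
    """Exact maximum by enumerating all 2^n vertices: since variance is
    convex, its maximum over the box is attained at some vertex."""
    return max(variance(list(v)) for v in product(*intervals))

if __name__ == "__main__":
    # Example intervals satisfying the subset property:
    # no interval lies strictly inside another one's interior.
    box = [(0.0, 1.0), (0.5, 1.5), (1.0, 2.0), (1.4, 2.4)]
    print(max_variance_subset_property(box))  # matches the brute-force value
    print(max_variance_bruteforce(box))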
