One of the main objectives of statistics is to estimate the parameters of a probability distribution based on a sample taken from this distribution. Of course, since the sample is finite, the estimate X is, in general, different from the actual value x of the corresponding parameter. What we can require is that the estimate be unbiased, i.e., that the mean value of the difference X - x be equal to 0: E[X] = x. In some problems, unbiased estimates are not possible. We show that in some such problems, it is still possible to have interval unbiased estimates, i.e., interval-valued estimates [L, R] for which x is in [E[L], E[R]]. In some such cases, it is possible to have asymptotically sharp estimates, for which the interval [E[L], E[R]] is the narrowest possible.
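As a simple illustration of the interval-unbiasedness idea (a sketch, not the construction from the paper): for normal samples, the sample standard deviation S satisfies E[S] < sigma by Jensen's inequality, so S alone is biased low. Taking L = S and R = k*S for a sufficiently large assumed margin factor k (here k = 1.15, chosen for n = 5, where E[S]/sigma ≈ 0.94) gives an interval whose expected endpoints bracket sigma. The simulation below checks this numerically.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 2.0   # true parameter we try to estimate
n = 5         # small sample size, where the bias of S is noticeable

# Draw many samples of size n and compute the sample std S for each.
samples = rng.normal(0.0, sigma, size=(200_000, n))
S = samples.std(axis=1, ddof=1)

# By Jensen's inequality, E[S] < sigma: S is biased low.
mean_S = S.mean()

# Interval-valued estimate [L, R] = [S, k*S]; k = 1.15 is an assumed
# margin factor large enough that E[k*S] = k*E[S] exceeds sigma for n = 5.
k = 1.15
L, R = S, k * S

print(mean_S)            # below sigma = 2.0
print(R.mean())          # above sigma = 2.0: sigma lies in [E[L], E[R]]
```

With these choices, sigma lies in [E[L], E[R]] even though no endpoint is individually an unbiased point estimate of sigma; shrinking k toward E[S]/sigma... in the normal case, the constant 1/c4(n)... narrows the interval, which is the direction the "asymptotically sharp" notion formalizes.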