For a normal distribution, the probability density is everywhere positive, so in principle, all real numbers are possible. In reality, the probability that a random variable is far away from the mean is so small that this possibility can often be safely ignored. Usually, a small positive real number k is picked (e.g., k = 2 or k = 3); then, with probability P0(k) close to 1 (depending on k), a normally distributed random variable with mean a and standard deviation sigma belongs to the interval A = [a - k*sigma, a + k*sigma].
The actual error distribution may be non-Gaussian; in this case, the probability P(k) that the random variable belongs to A differs from P0(k). It is therefore desirable to select k for which the dependence of P(k) on the distribution is as small as possible. Empirically, this dependence is smallest for k in the interval [1.5, 2.5]. In this paper, we give a theoretical explanation for this empirical result.
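To illustrate the setup, the following sketch compares P(k) = Prob(|X - a| <= k*sigma) for a Gaussian against two example non-Gaussian distributions with the same mean and standard deviation; the choice of Laplace and uniform distributions here is our own illustration, not taken from the paper, and the printed spread is only one possible way to quantify the dependence of P(k) on the distribution.

```python
import math

def p_normal(k):
    # For a Gaussian, Prob(|X - a| <= k*sigma) = erf(k / sqrt(2)).
    return math.erf(k / math.sqrt(2))

def p_laplace(k):
    # A Laplace distribution with standard deviation sigma has scale
    # b = sigma / sqrt(2), so Prob(|X - a| <= k*sigma) = 1 - exp(-k*sqrt(2)).
    return 1.0 - math.exp(-k * math.sqrt(2))

def p_uniform(k):
    # A uniform distribution on [a - c, a + c] has sigma = c / sqrt(3),
    # so Prob(|X - a| <= k*sigma) = min(1, k / sqrt(3)).
    return min(1.0, k / math.sqrt(3))

# Tabulate P(k) for each distribution and the spread between them.
for k in [1.0, 1.5, 2.0, 2.5, 3.0]:
    ps = [p_normal(k), p_laplace(k), p_uniform(k)]
    spread = max(ps) - min(ps)
    print(f"k={k:.1f}  normal={ps[0]:.4f}  laplace={ps[1]:.4f}  "
          f"uniform={ps[2]:.4f}  spread={spread:.4f}")
```

For instance, at k = 2 the Gaussian gives P0(2) ~ 0.9545, while the Laplace and uniform distributions give ~0.9409 and 1.0, respectively, so the three values are already fairly close; how exactly to measure the "dependence on the distribution" is what the paper's theoretical analysis addresses.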