To gauge the accuracy of a measuring instrument, engineers analyze the known factors contributing to the instrument's inaccuracy. In addition to these known factors, however, there are usually unknown factors that also contribute to the inaccuracy. To assess the instrument's accuracy properly, and thus avoid compromising safety by underestimating the inaccuracy, these "unknown unknowns" must also be taken into account. In practice, this is usually done by multiplying the original inaccuracy estimate by a "safety" factor of 2. In this paper, we provide a possible theoretical explanation for this empirical factor.