Short version published in Proceedings of the Workshop on State-of-the-Art in Scientific Computing PARA'04, Lyngby, Denmark, June 20-23, 2004, Vol. 1, pp. 43-49; extended version published in Jack Dongarra, Kaj Madsen, and Jerzy Wasniewski (eds.), PARA'04 Workshop on State-of-the-Art in Scientific Computing, Springer Lecture Notes in Computer Science, 2006, Vol. 3732, pp. 75-82.


In many problems from science and engineering, the measurements are reasonably accurate, so we can use linearization (= sensitivity analysis) to describe the effect of measurement errors on the result of data processing. In many practical cases, however, the measurement accuracy is not as good, so, to get a good estimate of the resulting error, we need to take quadratic terms into consideration, i.e., in effect, to approximate the original algorithm by a quadratic function. The problem of estimating the range of a quadratic function is NP-hard, so, in the general case, we can only hope for a good heuristic. The traditional heuristic is similar to straightforward interval computations: we replace each operation on numbers with the corresponding operation of interval arithmetic (or of the arithmetic that takes partial probabilistic information into consideration). Alternatively, we can first diagonalize the matrix of the quadratic form and then apply the same approach to the result of diagonalization. Which heuristic is better? We show that sometimes the traditional heuristic is better and sometimes the new approach is better; asymptotically, which heuristic is better depends on how fast the eigenvalues, when sorted in decreasing order, decrease.
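To make the two heuristics concrete, here is a minimal sketch (the function names and the example matrix are mine, not from the paper) that encloses the range of a quadratic form q(x) = x^T A x over the box where each x_i lies in [-1, 1]. The first enclosure applies straightforward interval arithmetic term by term; the second first diagonalizes A = Q diag(lambda) Q^T and applies interval arithmetic to sum_k lambda_k y_k^2 with y = Q^T x. Neither enclosure is exact in general; this only illustrates that the two heuristics can give different bounds.

```python
import numpy as np

def imul(a, b):
    """Interval product [a] * [b]."""
    p = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(p), max(p))

def iadd(a, b):
    """Interval sum [a] + [b]."""
    return (a[0] + b[0], a[1] + b[1])

def isq(a):
    """Interval square [a]^2 (tighter than imul(a, a) when 0 is inside [a])."""
    lo, hi = a
    if lo <= 0.0 <= hi:
        return (0.0, max(lo * lo, hi * hi))
    return (min(lo * lo, hi * hi), max(lo * lo, hi * hi))

def naive_enclosure(A):
    """Traditional heuristic: interval arithmetic on sum_ij A_ij * x_i * x_j,
    treating every product x_i * x_j as an independent interval product."""
    n = A.shape[0]
    box = (-1.0, 1.0)
    total = (0.0, 0.0)
    for i in range(n):
        for j in range(n):
            total = iadd(total, imul((A[i, j], A[i, j]), imul(box, box)))
    return total

def diagonalized_enclosure(A):
    """New heuristic: diagonalize A = Q diag(lam) Q^T, enclose each
    y_k = (Q^T x)_k by interval arithmetic, then bound sum_k lam_k * y_k^2."""
    lam, Q = np.linalg.eigh(A)
    total = (0.0, 0.0)
    for k in range(A.shape[0]):
        # Since each x_i is in [-1, 1], |y_k| <= sum_i |Q_ik|.
        r = float(np.sum(np.abs(Q[:, k])))
        total = iadd(total, imul((lam[k], lam[k]), isq((-r, r))))
    return total

# Example: A has eigenvalues 1 and 3; the true range of q over the box is [0, 6].
A = np.array([[2.0, 1.0], [1.0, 2.0]])
print(naive_enclosure(A))         # enclosure from the traditional heuristic
print(diagonalized_enclosure(A))  # enclosure after diagonalization
```

On this particular matrix the diagonalized enclosure is narrower overall (it recovers the correct lower bound 0, since each lambda_k * y_k^2 is nonnegative), while the naive enclosure has the tighter upper bound; this matches the abstract's point that neither heuristic dominates.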

tr04-07a.pdf (151 kB)
Updated version: UTEP-CS-04-07a

tr04-07.pdf (135 kB)
Original file: UTEP-CS-04-07