Publication Date

10-2007

Comments

Technical Report: UTEP-CS-07-54

Published in: Van-Nam Huynh, Yoshiteru Nakamori, Hiroakira Ono, Jonathan Lawry, Vladik Kreinovich, and Hung T. Nguyen (eds.), Interval/Probabilistic Uncertainty and Non-Classical Logics, Springer-Verlag, Berlin-Heidelberg-New York, 2008, pp. 57-69.

Abstract

Support Vector Machines (SVM) are one of the most widely used techniques in machine learning. After an SVM algorithm processes the data and produces a classification, it is desirable to learn how well this classification fits the data. Several measures of fit exist, the most widely used of which is kernel target alignment. These measures, however, assume that the data are known exactly. In reality, whether the data points come from measurements or from expert estimates, they are only known with uncertainty. As a result, even if we know that the classification perfectly fits the nominal data, this same classification can be a bad fit for the actual values (which are somewhat different from the nominal ones). In this paper, we show how to take this uncertainty into account when estimating the quality of the resulting classification.
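For context, kernel target alignment is commonly defined as the normalized Frobenius inner product between the kernel matrix K and the ideal target kernel yy^T (for +/-1 labels). The sketch below is not taken from the paper: kernel_target_alignment follows the standard definition, while alignment_range_under_uncertainty is a purely illustrative, sampling-based way to see how the alignment can vary when data points are only known to within +/-delta; the paper's own treatment of interval/probabilistic uncertainty may differ.

```python
import numpy as np

def kernel_target_alignment(K, y):
    """Standard kernel-target alignment:
    A(K, yy^T) = <K, yy^T>_F / (||K||_F * ||yy^T||_F), labels assumed +/-1."""
    y = np.asarray(y, dtype=float)
    yyT = np.outer(y, y)                       # ideal target kernel
    num = np.sum(K * yyT)                      # Frobenius inner product
    den = np.linalg.norm(K, "fro") * np.linalg.norm(yyT, "fro")
    return num / den

def alignment_range_under_uncertainty(X, y, delta, kernel, n_samples=200, seed=0):
    """Illustrative (hypothetical) helper: sample data points within the stated
    uncertainty bounds [-delta, +delta] and report the resulting range of
    alignment values -- not the authors' method, just a sanity check."""
    rng = np.random.default_rng(seed)
    vals = []
    for _ in range(n_samples):
        X_pert = X + rng.uniform(-delta, delta, size=X.shape)
        vals.append(kernel_target_alignment(kernel(X_pert), y))
    return min(vals), max(vals)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Two toy Gaussian clusters with +/-1 labels
    X = np.vstack([rng.normal(-1, 0.3, (20, 2)), rng.normal(1, 0.3, (20, 2))])
    y = np.array([-1] * 20 + [1] * 20)
    linear = lambda Z: Z @ Z.T                 # linear kernel
    print("nominal alignment:", kernel_target_alignment(linear(X), y))
    print("alignment range under +/-0.1 uncertainty:",
          alignment_range_under_uncertainty(X, y, 0.1, linear))
```

Even in this toy setting, the nominal alignment and the range obtained under modest data uncertainty can differ noticeably, which is the effect the paper's quality estimates are meant to capture.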
