A convex optimization algorithm for sparse representation and applications in classification problems
Abstract
In pattern recognition and machine learning, a classification problem refers to finding an algorithm that assigns a given input to one of several categories. Many natural signals are sparse or compressible in the sense that they have short representations when expressed in a suitable basis. Motivated by the recent successful development of algorithms for sparse signal recovery, we apply the selective nature of sparse representation to perform classification. Any test sample is represented in an overcomplete dictionary whose base elements are the training samples themselves. A given test sample can be expressed as a linear combination of only those training samples belonging to its own class, which naturally yields a sparse representation. Finding the correct coefficients in a given basis or training dataset allows us to identify the correct category, or class, of any input to be categorized. To find such a sparse linear representation, we implement an l1-minimization algorithm. This methodology is robust with respect to outliers and, in contrast to other classification algorithms, involves no model selection in the optimization step. The minimization algorithm is based on a convex relaxation that has been proven to recover sparse signals efficiently. To study its performance, the proposed method is applied to several test datasets with different numbers of features and samples. A dimensionality reduction technique is also proposed and implemented as part of the classification process.
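The sketch below illustrates the general idea described in the abstract, not the dissertation's actual implementation: a test sample y is sparsely coded over a dictionary A whose columns are training samples, by solving the basis-pursuit problem min ||x||_1 subject to Ax = y, and the predicted class is the one whose coefficients give the smallest reconstruction residual. The helper names (sparse_code, classify) and the use of scipy.optimize.linprog as the l1 solver are assumptions for illustration only.

```python
# Minimal sketch of sparse-representation classification via l1-minimization.
# Illustrative only; assumes NumPy/SciPy and a linear-programming reformulation
# of  min ||x||_1  subject to  A x = y.
import numpy as np
from scipy.optimize import linprog

def sparse_code(A, y):
    """Solve min ||x||_1 s.t. A x = y via an equivalent linear program."""
    m, n = A.shape
    # Variables z = [x, u]; the constraints |x_i| <= u_i are written as two inequalities.
    c = np.concatenate([np.zeros(n), np.ones(n)])        # minimize sum(u)
    A_eq = np.hstack([A, np.zeros((m, n))])               # A x = y
    I = np.eye(n)
    A_ub = np.vstack([np.hstack([I, -I]),                 #  x - u <= 0
                      np.hstack([-I, -I])])               # -x - u <= 0
    b_ub = np.zeros(2 * n)
    bounds = [(None, None)] * n + [(0, None)] * n         # x free, u >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
                  bounds=bounds, method="highs")
    return res.x[:n]

def classify(A, labels, y):
    """Assign y to the class whose training samples best reconstruct it."""
    x = sparse_code(A, y)
    residuals = {}
    for c in np.unique(labels):
        x_c = np.where(labels == c, x, 0.0)               # keep only class-c coefficients
        residuals[c] = np.linalg.norm(y - A @ x_c)
    return min(residuals, key=residuals.get)
```

In this formulation the selectivity of the sparse code does the classification: ideally only coefficients associated with the correct class are nonzero, so that class's partial reconstruction has the smallest residual.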
Subject Area
Applied Mathematics | Mathematics | Computer Science
Recommended Citation
Sanchez Arias, Reinaldo, "A convex optimization algorithm for sparse representation and applications in classification problems" (2013). ETD Collection for University of Texas, El Paso. AAI3565935.
https://scholarworks.utep.edu/dissertations/AAI3565935