Oriented principal component analysis for large margin classifiers

Citation
S. Bermejo and J. Cabestany, Oriented principal component analysis for large margin classifiers, NEURAL NETW, 14(10), 2001, pp. 1447-1461
Citations number
44
Subject Categories
AI Robotics and Automatic Control
Journal title
NEURAL NETWORKS
ISSN journal
0893-6080
Volume
14
Issue
10
Year of publication
2001
Pages
1447 - 1461
Database
ISI
SICI code
0893-6080(200112)14:10<1447:OPCAFL>2.0.ZU;2-A
Abstract
Large margin classifiers (such as MLPs) are designed to assign training samples with high confidence (or margin) to one of the classes. Recent theoretical results for these systems show why the use of regularisation terms and feature extractor techniques can enhance their generalisation properties. Since the optimal subset of features selected depends not only on the classification problem but also on the particular classifier with which they are used, global learning algorithms for large margin classifiers that use feature extractor techniques are desired. A direct approach is to optimise a cost function based on the margin error, which also incorporates regularisation terms for controlling capacity. These terms must penalise a classifier with the largest margin for the problem at hand. Our work shows that the inclusion of a PCA term can be employed for this purpose. Since PCA only achieves an optimal discriminatory projection for some particular distribution of data, the margin of the classifier can then be effectively controlled. We also propose a simple constrained search for the global algorithm, in which the feature extractor and the classifier are trained separately. This allows a degree of flexibility for including heuristics that can enhance the search and the performance of the computed solution. Experimental results demonstrate the potential of the proposed method. (C) 2001 Elsevier Science Ltd. All rights reserved.
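The idea sketched in the abstract — minimising a margin-based cost with a PCA regularisation term, while training the feature extractor and the classifier in a separate, constrained way — can be illustrated with a deliberately simplified toy implementation. This is not the authors' algorithm (the paper uses MLPs and its own cost function); it is a hypothetical linear sketch in which the margin error is approximated by a hinge loss, the PCA term by a variance-maximisation gradient on the projection, and the constrained search by re-orthonormalising the projection after each update. All data and hyperparameters below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class problem (illustrative data, not from the paper):
# the first input direction carries both most of the variance and
# most of the class information, so a PCA-style term is helpful here.
n, d, k = 200, 5, 2
X = rng.normal(size=(n, d))
X[:, 0] *= 3.0
y = np.where(X[:, 0] + 0.5 * X[:, 1] > 0.0, 1.0, -1.0)
C = X.T @ X / n                      # data covariance (zero-mean by construction)

def orth(M):
    """Orthonormalise columns (QR with a sign fix for determinism)."""
    Q, R = np.linalg.qr(M)
    return Q * np.sign(np.diag(R))

W = orth(rng.normal(size=(d, k)))    # feature extractor: d -> k projection
v, b = np.zeros(k), 0.0              # linear large margin classifier on the features
lam, lr = 0.1, 0.05                  # PCA-term weight and step size (arbitrary)

for _ in range(300):
    Z = X @ W
    margins = y * (Z @ v + b)
    act = margins < 1.0              # samples violating the margin
    # Classifier step: subgradient descent on the hinge (margin) loss.
    v -= lr * (-(y[act][:, None] * Z[act]).sum(axis=0) / n)
    b -= lr * (-y[act].sum() / n)
    # Extractor step: margin subgradient plus the PCA (variance) term,
    # followed by re-orthonormalisation -- the "constrained search".
    g_margin = -((X[act].T @ y[act])[:, None] @ v[None, :]) / n
    W = orth(W - lr * g_margin + lr * lam * (C @ W))

acc = float(np.mean(np.sign((X @ W) @ v + b) == y))
```

Training the two parts with separate update rules, as here, is what leaves room for the heuristics the abstract mentions: either step can be replaced (e.g. the classifier step by a full retraining pass) without touching the other.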