An overview of statistical learning theory

Authors
Vapnik, V.N.
Citation
V.N. Vapnik, An overview of statistical learning theory, IEEE Transactions on Neural Networks, 10(5), 1999, pp. 988-999
Citations number
30
Subject Categories
AI Robotics and Automatic Control
Journal title
IEEE TRANSACTIONS ON NEURAL NETWORKS
ISSN journal
1045-9227
Volume
10
Issue
5
Year of publication
1999
Pages
988 - 999
Database
ISI
SICI code
1045-9227(199909)10:5<988:AOOSLT>2.0.ZU;2-9
Abstract
Statistical learning theory was introduced in the late 1960's. Until the 1990's it was a purely theoretical analysis of the problem of function estimation from a given collection of data. In the middle of the 1990's new types of learning algorithms (called support vector machines) based on the developed theory were proposed. This made statistical learning theory not only a tool for the theoretical analysis but also a tool for creating practical algorithms for estimating multidimensional functions. This article presents a very general overview of statistical learning theory including both theoretical and algorithmic aspects of the theory. The goal of this overview is to demonstrate how the abstract learning theory established conditions for generalization which are more general than those discussed in classical statistical paradigms and how the understanding of these conditions inspired new algorithmic approaches to function estimation problems. A more detailed overview of the theory (without proofs) can be found in Vapnik (1995). In Vapnik (1998) one can find a detailed description of the theory (including proofs).