It is generally believed that the support vector machine (SVM) optimizes the generalization error and outperforms other learning machines. We show analytically, by concrete examples in the one-dimensional case, that the SVM does improve the mean and standard deviation of the generalization error by a constant factor, compared to the worst learning machine. Our approach is in terms of extreme value theory, and both the mean and variance of the generalization error are calculated exactly for all cases considered. We propose a new version of the SVM (the scaled SVM), which can further reduce the mean of the generalization error.
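The one-dimensional comparison described above can be illustrated numerically. The sketch below is not the paper's analysis; it is a minimal Monte Carlo simulation under illustrative assumptions: inputs uniform on [0, 1], a true threshold at 0.5, a hard-margin SVM that places its threshold at the midpoint of the margin, and a "worst" consistent learner that places its threshold at an edge of the margin. All names and parameters are hypothetical.

```python
import random
import statistics

def simulate(n=20, trials=5000, seed=0):
    """Monte Carlo estimate of mean generalization error for two
    consistent 1-D threshold classifiers (illustrative assumptions)."""
    rng = random.Random(seed)
    t = 0.5  # assumed true decision boundary; inputs uniform on [0, 1]
    svm_err, worst_err = [], []
    for _ in range(trials):
        xs = [rng.random() for _ in range(n)]
        neg = [x for x in xs if x < t]
        pos = [x for x in xs if x >= t]
        if not neg or not pos:
            continue  # degenerate sample with only one class; skip
        lo, hi = max(neg), min(pos)  # edges of the margin around t
        # 1-D hard-margin SVM: threshold at the midpoint of the margin.
        # For a uniform input density, the generalization error equals
        # the distance between the learned and the true threshold.
        svm_err.append(abs((lo + hi) / 2 - t))
        # Worst consistent learner: threshold at the farther margin edge.
        worst_err.append(max(t - lo, hi - t))
    return statistics.mean(svm_err), statistics.mean(worst_err)

m_svm, m_worst = simulate()
print(m_svm, m_worst)
```

Because the midpoint of the margin is never farther from the true boundary than the worse of the two edges, the simulated mean error of the SVM comes out smaller, consistent with the constant-factor improvement the abstract claims.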