Finding an exact form for the generalization error of a learning machine is an open problem, even in the simplest case: simple perceptron learning. We introduce a new approach to this problem. The generalization error of the simple perceptron is expressed as a linear combination of extreme values of the inputs. With the help of extreme value theory in statistics, we then obtain an exact form of the generalization error of the simple perceptron in the case of worst-case learning. Generalization errors of the higher-order perceptron, which take the form of an inverse power law in the number of examples, are also considered.