D. Saad, CAPACITY OF THE SINGLE-LAYER PERCEPTRON AND MINIMAL TRAJECTORY TRAINING ALGORITHMS, Journal of Physics A: Mathematical and General, 26(15), 1993, pp. 3757-3773
The entire set of binary vectors to be stored using a single-layer perceptron can be divided into two groups: one for which the output neuron state consistently equals one of the input neuron states, and a second for which the output neuron state consistently negates the same input neuron. The capacity of the single-layer perceptron depends on the ratio between these two groups. This dependence is examined via statistical mechanical methods, yielding the probability of obtaining a linearly separable solution for a random selection of input-output relations at a given value of the above ratio. This probability is extremely useful for designing recurrent neural network training algorithms, which use the obtained results to select the most probable internal representations to be realized in such nets. Moreover, the distribution of the linearly separable binary functions enables us to obtain a good estimate of the total number of linearly separable binary functions for a given number of input neurons, a task considered a hard computational problem. Additional incentives for carrying out the calculation are understanding the capacity of simple nets for certain types of input-output correlations and laying the foundations for analysing some constructive training algorithms such as the tiling and upstart algorithms. All results are consistent with existing theoretical results.
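As an illustrative sketch (not taken from the paper itself), linear separability of a Boolean function can be tested with standard perceptron learning: by the perceptron convergence theorem, training halts on a separable function, while a fixed epoch budget serves as a practical cutoff for non-separable ones. Exhaustively enumerating all Boolean functions of two inputs then recovers the well-known count of 14 linearly separable functions out of 16. The function and variable names below are ours, chosen for illustration.

```python
import itertools

def perceptron_separable(truth_table, n, epochs=1000):
    """Try to realize a function {-1,+1}^n -> {-1,+1} with a single-layer
    perceptron (with bias). Returns True if a separating weight vector is
    found within the epoch budget, False otherwise."""
    w = [0.0] * (n + 1)  # last component is the bias weight
    for _ in range(epochs):
        changed = False
        for x, t in truth_table.items():
            xb = x + (1,)  # append constant bias input
            s = sum(wi * xi for wi, xi in zip(w, xb))
            if (1 if s > 0 else -1) != t:
                # perceptron update rule: move weights toward the target
                w = [wi + t * xi for wi, xi in zip(w, xb)]
                changed = True
        if not changed:  # converged: all patterns classified correctly
            return True
    return False

inputs = list(itertools.product((-1, 1), repeat=2))
AND = {x: (1 if x == (1, 1) else -1) for x in inputs}
XOR = {x: (1 if x[0] != x[1] else -1) for x in inputs}

# Enumerate all 2^(2^2) = 16 Boolean functions of two inputs and count
# how many are linearly separable.
count = sum(
    perceptron_separable(dict(zip(inputs, bits)), 2)
    for bits in itertools.product((-1, 1), repeat=len(inputs))
)
```

For two inputs this brute-force count gives 14; for larger numbers of input neurons such enumeration quickly becomes infeasible (there are 2^(2^n) functions), which is why the statistical estimate discussed in the abstract is valuable.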