We present an algorithm for improving the accuracy of algorithms for learning binary concepts. The improvement is achieved by combining a large number of hypotheses, each of which is generated by training the given learning algorithm on a different set of examples. Our algorithm is based on ideas presented by Schapire and represents an improvement over his results. The analysis of our algorithm provides general upper bounds on the resources required for learning in Valiant's polynomial PAC learning framework, which are the best general upper bounds known today. We show that the number of hypotheses that are combined by our algorithm is the smallest number possible. Other outcomes of our analysis are results regarding the representational power of threshold circuits, the relation between learnability and compression, and a method for parallelizing PAC learning algorithms. We provide extensions of our algorithms to cases in which the concepts are not binary and to the case where the accuracy of the learning algorithm depends on the distribution of the instances. (C) 1995 Academic Press, Inc.
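The combination scheme described above can be illustrated with a minimal sketch. This is not the paper's algorithm (which weights examples and bounds the number of hypotheses precisely); it is only the simplest instance of the idea of training a weak learner on different sets of examples and combining the resulting hypotheses by majority vote. The decision-stump weak learner and the bootstrap resampling are illustrative assumptions, not taken from the source.

```python
import random

def weak_learner(examples):
    # Illustrative weak hypothesis: a one-feature threshold (decision
    # stump) chosen to minimize training error on the given examples.
    best, best_err = None, float("inf")
    for i in range(len(examples[0][0])):                 # feature index
        for thresh in sorted({x[i] for x, _ in examples}):
            for sign in (1, -1):
                err = sum(1 for x, y in examples
                          if (1 if sign * (x[i] - thresh) >= 0 else 0) != y)
                if err < best_err:
                    best_err, best = err, (i, thresh, sign)
    i, t, s = best
    return lambda x: 1 if s * (x[i] - t) >= 0 else 0

def combine_by_majority(examples, k, seed=0):
    # Train k weak hypotheses, each on a different (resampled) set of
    # examples, and return their unweighted majority vote.
    rng = random.Random(seed)
    hyps = [weak_learner([rng.choice(examples) for _ in examples])
            for _ in range(k)]
    return lambda x: 1 if 2 * sum(h(x) for h in hyps) > len(hyps) else 0
```

For example, on one-dimensional points labeled by a threshold at 0.5, each stump trained on a resampled set recovers an approximate threshold, and the majority vote classifies points far from the boundary correctly.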