Minimax lower bounds for concept learning state, for example, that for each sample size n and each learning rule g(n), there exists a distribution of the observation X and a concept C to be learnt such that the expected error of g(n) is at least a constant times V/n, where V is the VC dimension of the concept class. However, these bounds say nothing about the rate of decrease of the error for a fixed distribution-concept pair. In this paper we investigate minimax lower bounds in such a (stronger) sense. We show that for several natural k-parameter concept classes, including the class of linear halfspaces, the class of balls, the class of polyhedra with a certain number of faces, and a class of neural networks, for any sequence of learning rules {g(n)} there exists a fixed distribution of X and a fixed concept C such that the expected error is larger than a constant times k/n for infinitely many n. We also obtain such strong minimax lower bounds for the tail distribution of the probability of error, which extend the corresponding minimax lower bounds.
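The two kinds of statement can be contrasted in the following sketch; the notation here is ours rather than the paper's (L(g(n)) stands for the probability of error of g(n) under the underlying distribution-concept pair, E denotes expectation over the n training samples, and c, c' are unspecified positive constants). The classical bound holds separately for each sample size n,

\[
\inf_{g(n)} \; \sup_{(X,C)} \; \mathbb{E}\, L(g(n)) \;\ge\; c\,\frac{V}{n},
\]

whereas the strong bound established here fixes a single distribution-concept pair for the entire sequence of rules:

\[
\forall\, \{g(n)\} \;\; \exists\, (X,C): \qquad \mathbb{E}\, L(g(n)) \;\ge\; c'\,\frac{k}{n} \quad \text{for infinitely many } n.
\]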