We study the problem of determining, for a class of functions H, whether an unknown target function f is contained in H or is "far" from any function in H. Thus, in contrast to problems of learning, where we must construct a good approximation to f in H on the basis of sample data, in problems of testing we are only required to determine the existence of a good approximation. Our main results demonstrate that, over the domain [0,1]^d for constant d, the number of examples required for testing grows only as O(s^(1/2+delta)) (where delta is any small constant), for both decision trees of size s and a special class of neural networks with s hidden units. This is in contrast to the Omega(s) examples required for learning these same classes. Our tests are based on combinatorial constructions demonstrating that these classes can be approximated by small classes of coarse partitions of space, and rely on repeated application of the well-known birthday paradox. (C) 2000 Academic Press.
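The sublinear sample bound hinges on the birthday paradox: if a class is approximated by a coarse partition with n cells, then roughly sqrt(n) random samples already produce colliding pairs (two samples landing in the same cell) with constant probability, and collision statistics can be tested without learning the function. The following sketch (not from the paper; the function names and parameters are illustrative) checks the standard collision count C(m,2)/n against simulation:

```python
import random

def expected_collisions(m, n):
    # Expected number of colliding pairs when m samples are drawn
    # uniformly from n cells: C(m, 2) / n.
    return m * (m - 1) / (2 * n)

def sampled_collisions(m, n, trials=2000, seed=0):
    # Empirical average number of colliding pairs over many trials.
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        counts = {}
        for _ in range(m):
            cell = rng.randrange(n)
            counts[cell] = counts.get(cell, 0) + 1
        # Each cell holding k samples contributes C(k, 2) colliding pairs.
        total += sum(k * (k - 1) // 2 for k in counts.values())
    return total / trials

n = 10_000            # cells in a coarse partition of the domain
m = 2 * int(n ** 0.5) # only ~sqrt(n) samples, far below n
print(expected_collisions(m, n))
print(sampled_collisions(m, n))
```

With m = 200 samples over n = 10,000 cells, about two colliding pairs are expected per run, illustrating why testing against coarse partitions needs only on the order of sqrt(n) examples rather than the Omega(n) needed to learn the partition itself.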