This article studies self-directed learning, a variant of the on-line (or incremental) learning model in which the learner selects the presentation order for the instances. Alternatively, one can view this model as a variation of learning with membership queries in which the learner is only "charged" for membership queries for which it could not predict the outcome. We give tight bounds on the complexity of self-directed learning for the concept classes of monomials, monotone DNF formulas, and axis-parallel rectangles in {0, 1, ..., n-1}^d. These results demonstrate that the number of mistakes under self-directed learning can be surprisingly small. We then show that learning complexity in the model of self-directed learning is less than that of all other commonly studied on-line and query learning models. Next we explore the relationship between the complexity of self-directed learning and the Vapnik-Chervonenkis (VC) dimension. We show that, in general, the VC-dimension and the self-directed learning complexity are incomparable. However, for some special cases, we show that the VC-dimension gives a lower bound for the self-directed learning complexity. Finally, we explore a relationship between Mitchell's version space algorithm and the existence of self-directed learning algorithms that make few mistakes.