We present an off-line variant of the mistake-bound model of learning. This is an intermediate model between the on-line learning model (Littlestone, 1988; Littlestone, 1989) and the self-directed learning model (Goldman, Rivest & Schapire, 1993; Goldman & Sloan, 1994). As in the other two models, a learner in the off-line model has to learn an unknown concept from a sequence of elements of the instance space on which it makes "guess and test" trials. In all models, the aim of the learner is to make as few mistakes as possible. The difference between the models is that, while in the on-line model only the set of possible elements is known, in the off-line model the sequence of elements (i.e., the identity of the elements as well as the order in which they are to be presented) is known to the learner in advance. On the other hand, the learner is weaker than the self-directed learner, which is allowed to choose adaptively the sequence of elements presented to it. We study some of the fundamental properties of the off-line model. In particular, we compare the number of mistakes made by the off-line learner on certain concept classes to those made by the on-line and self-directed learners. We give bounds on the possible gaps between the various models and show examples proving that our bounds are tight. Another contribution of this paper is the extension of the combinatorial tool of labeled trees to a unified approach that captures the various mistake-bound measures of all the models discussed. We believe that this tool will prove useful for further study of models of incremental learning.
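As an illustration of the "guess and test" protocol (not a construction from this paper), the following Python sketch runs an off-line learner over a sequence of instances known in advance, predicting on each trial by majority vote over the concepts still consistent with the feedback so far (the classical halving strategy). The threshold concept class and the choice of halving as the predictor are illustrative assumptions; halving guarantees at most log2 of the class size many mistakes regardless of the sequence.

```python
# Illustrative sketch of a "guess and test" mistake-bound trial sequence
# in the off-line model: the full instance sequence is known up front.
# Concept class and predictor (halving) are our own illustrative choices,
# not the paper's constructions.

def offline_halving(concept_class, sequence, target):
    """Predict each label by majority vote of the still-consistent
    concepts; return the total number of mistakes made."""
    version_space = list(concept_class)
    mistakes = 0
    for x in sequence:  # the whole sequence is available in advance
        votes = sum(c(x) for c in version_space)
        guess = votes * 2 >= len(version_space)  # majority prediction
        truth = target(x)
        if guess != truth:
            mistakes += 1
        # keep only the concepts that agree with the revealed label
        version_space = [c for c in version_space if c(x) == truth]
    return mistakes

# Toy example: threshold concepts c_t(x) = (x >= t) over {0, ..., 7},
# with a hidden target threshold of 5.
thresholds = [(lambda x, t=t: x >= t) for t in range(9)]
target = lambda x: x >= 5
seq = list(range(8))
print(offline_halving(thresholds, seq, target))
```

Halving here bounds the number of mistakes by log2(9) < 4 on any sequence; the paper's question is how much further knowing the sequence in advance (or, for the self-directed learner, choosing it) can reduce this count for particular concept classes.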