This paper describes new and efficient algorithms for learning deterministic finite automata. Our approach is primarily distinguished by two features: (1) the adoption of an average-case setting to model the "typical" labeling of a finite automaton, while retaining a worst-case model for the underlying graph of the automaton, and (2) a learning model in which the learner is not provided with the means to experiment with the machine, but rather must learn solely by observing the automaton's output behavior on a random input sequence. The main contribution of this paper is to present the first efficient algorithms for learning nontrivial classes of automata in an entirely passive learning model. We adopt an on-line learning model in which the learner is asked to predict the output of the next state, given the next symbol of the random input sequence; the goal of the learner is to make as few prediction mistakes as possible. Assuming the learner has a means of resetting the target machine to a fixed start state, we first present an efficient algorithm that makes an expected polynomial number of mistakes in this model. Next, we show how this first algorithm can be used as a subroutine by a second algorithm that also makes a polynomial number of mistakes even in the absence of a reset. Along the way, we prove a number of combinatorial results for randomly labeled automata. We also show that the labeling of the states and the bits of the input sequence need not be truly random, but merely semi-random. Finally, we discuss an extension of our results to a model in which automata are used to represent distributions over binary strings. © 1997 Academic Press.
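
To make the on-line prediction protocol concrete, the following is a minimal sketch (not the paper's algorithm) of the setting with a reset: the environment performs a random walk on a randomly labeled DFA, the learner predicts the output of the next state before each step, and mistakes are counted. The DFA construction, the naive prefix-memorizing learner, and all names below are illustrative assumptions.

    import random

    def random_dfa(n_states, alphabet=(0, 1), seed=0):
        """Worst-case-style random transition graph with a uniformly
        random {0,1} labeling of the states (the 'typical' labeling)."""
        rng = random.Random(seed)
        delta = {(q, a): rng.randrange(n_states)
                 for q in range(n_states) for a in alphabet}
        label = [rng.randrange(2) for _ in range(n_states)]
        return delta, label

    def online_prediction_with_reset(n_states=50, walk_len=20,
                                     n_walks=2000, seed=1):
        rng = random.Random(seed)
        delta, label = random_dfa(n_states, seed=seed)
        memory = {}    # input prefix since last reset -> observed output
        mistakes = 0
        for _ in range(n_walks):      # each walk restarts at the reset state 0
            state, prefix = 0, ()
            for _ in range(walk_len):
                sym = rng.randrange(2)   # next symbol of the random input
                prefix += (sym,)
                # Predict before observing; guess at random on unseen prefixes.
                guess = memory.get(prefix, rng.randrange(2))
                state = delta[(state, sym)]
                truth = label[state]
                mistakes += (guess != truth)
                # Memorization is sound only because the reset guarantees
                # that equal prefixes always reach the same state.
                memory[prefix] = truth
        return mistakes

    print(online_prediction_with_reset())

The naive prefix-memorizing learner above can make a number of mistakes that grows with the number of distinct prefixes it encounters; the point of the algorithms in the paper is to do exponentially better, achieving an expected polynomial mistake bound (and, via the second algorithm, dispensing with the reset).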