A new algorithm for learning one-variable pattern languages from positive data is proposed and analyzed with respect to its average-case behavior. We consider the total learning time, which takes into account all operations until convergence to a correct hypothesis is achieved. For almost all meaningful distributions defining how the pattern variable is replaced by a string to generate random examples of the target pattern language, it is shown that this algorithm converges within an expected constant number of rounds and a total learning time that is linear in the pattern length. Thus, our solution is average-case optimal in a strong sense. Though one-variable pattern languages can neither be finitely inferred from positive data nor PAC-learned, our approach can be extended to a probabilistic finite learner that exactly infers all one-variable pattern languages from positive data with high confidence. It is a long-standing open problem whether or not pattern languages can be learned in cases where empty substitutions for the pattern variables are also allowed. Our learning strategy can be generalized to this situation as well. Finally, we show some experimental results for the behavior of this new learning algorithm in practice. (C) 2000 Academic Press.
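To illustrate the setting, here is a minimal sketch of how positive examples of a one-variable pattern language can be generated: every occurrence of the pattern variable is replaced by the same string drawn from some substitution distribution. All names, the variable symbol `X`, and the uniform-length distribution are illustrative assumptions, not the paper's construction.

```python
import random

def generate_example(pattern, substitution, var="X"):
    """Replace every occurrence of the variable `var` in `pattern`
    by the same substitution string, yielding a word of the language."""
    return pattern.replace(var, substitution)

def random_positive_example(pattern, alphabet="ab", max_len=5, var="X"):
    """Draw a random non-empty substitution string (an assumed toy
    distribution: uniform length in 1..max_len, uniform letters) and
    return the resulting positive example of the pattern language."""
    length = random.randint(1, max_len)
    s = "".join(random.choice(alphabet) for _ in range(length))
    return generate_example(pattern, s, var)

# Example: the pattern "aXbXa" with substitution "ba" yields "ababbaa".
print(generate_example("aXbXa", "ba"))
```

In the non-erasing setting studied here the substitution must be non-empty; allowing the empty substitution gives the open problem mentioned above.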