This paper presents a model-based, unsupervised algorithm for recovering word boundaries in a natural-language text from which they have been deleted. The algorithm is derived from a probability model of the source that generated the text. The fundamental structure of the model is specified abstractly so that the detailed component models of phonology, word order, and word frequency can be replaced in a modular fashion. The model yields a language-independent, prior probability distribution on all possible sequences of all possible words over a given alphabet, based on the assumption that the input was generated by concatenating words from a fixed but unknown lexicon. The model is unusual in that it treats the generation of a complete corpus, regardless of length, as a single event in the probability space. Accordingly, the algorithm does not estimate a probability distribution on words; instead, it attempts to calculate the prior probabilities of various word sequences that could underlie the observed text. Experiments on phonemic transcripts of spontaneous speech by parents to young children suggest that our algorithm is more effective than other proposed algorithms, at least when utterance boundaries are given and the text includes a substantial number of short utterances.
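Although the abstract does not spell out the search procedure, the core idea of ranking candidate segmentations of an unsegmented corpus by a prior that favors a compact, reusable lexicon can be illustrated with a toy sketch. The two-part description-length score and the exhaustive search below are illustrative simplifications under our own assumptions, not the paper's actual model or algorithm; the corpus, the scoring function, and all identifiers are hypothetical.

```python
from itertools import combinations, product
from math import log2

def segmentations(utterance):
    """Enumerate every way to split one utterance into contiguous words."""
    n = len(utterance)
    for k in range(n):                                  # number of internal cuts
        for cuts in combinations(range(1, n), k):
            bounds = (0,) + cuts + (n,)
            yield tuple(utterance[i:j] for i, j in zip(bounds, bounds[1:]))

def score(corpus_words, alphabet_size=26):
    """Crude two-part code standing in for a prior on word sequences:
    the cost of spelling out the induced lexicon plus the cost of the
    word-token sequence under its own empirical frequencies."""
    tokens = [w for utt in corpus_words for w in utt]
    lexicon = set(tokens)
    lex_cost = sum((len(w) + 1) * log2(alphabet_size + 1) for w in lexicon)
    n = len(tokens)
    seq_cost = -sum(log2(tokens.count(w) / n) for w in tokens)
    return lex_cost + seq_cost

# Toy unsegmented utterances (hypothetical data); repetition across the
# corpus is what lets the prior prefer a segmented analysis.
corpus = ["adog", "adoga", "dogadog"]
best = min(product(*(list(segmentations(u)) for u in corpus)), key=score)
for utt in best:
    print(" ".join(utt))
```

The sketch enumerates all joint segmentations, which is only feasible for very short inputs; it is meant to convey the objective of scoring whole-corpus segmentations under a prior, not the efficient search or the phonology, word-order, and word-frequency components of the paper's model.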