This paper describes a fully implemented, broad-coverage model of human syntactic processing. The model uses probabilistic parsing techniques, which combine phrase structure, lexical category, and limited subcategory probabilities with an incremental, left-to-right "pruning" mechanism based on cascaded Markov models. The parameters of the system are established through a uniform training algorithm, which determines maximum-likelihood estimates from a parsed corpus. The probabilistic parsing mechanism enables the system to achieve good accuracy on typical, "garden-variety" language (i.e., when tested on corpora). Furthermore, the incremental probabilistic ranking of the preferred analyses during parsing also naturally explains observed human behavior for a range of garden-path structures. We do not make strong psychological claims about the specific probabilistic mechanism discussed here, which is limited by a number of practical considerations. Rather, we argue that incremental probabilistic parsing models are, in general, extremely well suited to explaining this dual nature of human linguistic performance: generally good, yet occasionally pathological.
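The training-and-pruning scheme described above can be sketched in miniature: maximum-likelihood (relative-frequency) estimates are read off an annotated corpus, and analyses are extended word by word while low-ranked hypotheses are pruned. This is only an illustrative toy (a single Markov layer over lexical categories, with an invented three-sentence corpus), not the paper's actual cascaded system; all names here are assumptions.

```python
from collections import defaultdict

# Hypothetical toy "parsed corpus": (word, tag) pairs standing in for a
# treebank; the real system trains on full phrase-structure annotations.
corpus = [
    [("the", "DT"), ("dog", "NN"), ("barks", "VB")],
    [("the", "DT"), ("man", "NN"), ("walks", "VB")],
    [("a", "DT"), ("dog", "NN"), ("walks", "VB")],
]

# Maximum-likelihood (relative-frequency) counts for emission and
# tag-transition probabilities, i.e., uniform corpus-based training.
emit = defaultdict(lambda: defaultdict(int))
trans = defaultdict(lambda: defaultdict(int))
for sent in corpus:
    prev = "<s>"
    for word, tag in sent:
        emit[tag][word] += 1
        trans[prev][tag] += 1
        prev = tag

def p_emit(tag, word):
    total = sum(emit[tag].values())
    return emit[tag][word] / total if total else 0.0

def p_trans(prev, tag):
    total = sum(trans[prev].values())
    return trans[prev][tag] / total if total else 0.0

def incremental_parse(words, beam=2):
    """Extend hypotheses left to right, keeping only the `beam` best;
    abandoned low-ranked analyses model garden-path breakdowns."""
    hyps = [([], "<s>", 1.0)]  # (tags so far, previous tag, probability)
    for word in words:
        extended = []
        for tags, prev, p in hyps:
            for tag in list(emit):
                q = p * p_trans(prev, tag) * p_emit(tag, word)
                if q > 0:
                    extended.append((tags + [tag], tag, q))
        # Prune: retain only the highest-probability analyses.
        hyps = sorted(extended, key=lambda h: -h[2])[:beam]
    return hyps

best = incremental_parse(["the", "dog", "walks"])
print(best[0][0])  # most probable category sequence: ['DT', 'NN', 'VB']
```

A garden-path effect corresponds to the correct analysis falling outside the beam early on, so that no surviving hypothesis can be extended when the disambiguating word arrives.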