We describe an ACT-R model of sentence memory that extracts both a parsed surface representation and a propositional representation. In addition, where possible, each sentence is given pointers to a long-term memory referent that reflects past experience with the situation described in the sentence. This system accounts for basic results in sentence memory without assuming different retention functions for surface, propositional, or situational information. Retention is better for gist than for surface information because of the greater complexity of the surface representation and because of the greater practice of the referent for the sentence. The model's only inference during sentence comprehension is to insert a pointer to an existing referent. Nonetheless, by this means it is capable of modeling many effects attributed to inferential processing. The ACT-R architecture also provides a mechanism for mixing the various memory strategies that participants bring to bear in these experiments. (C) 2001 Academic Press.
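The retention claims above follow from ACT-R's standard base-level learning equation, in which a chunk's activation grows with practice and decays as a power function of time. This equation belongs to the published ACT-R architecture generally, not to this model specifically, and is sketched here only to make the argument concrete:

```latex
B_i = \ln\left( \sum_{j=1}^{n} t_j^{-d} \right)
```

Here $t_j$ is the time since the $j$-th use of chunk $i$, $n$ is the number of uses, and $d$ is the decay rate (conventionally $0.5$). A referent chunk has accumulated many practice events across past experience, so its base-level activation is higher and declines more gracefully than that of a once-encountered surface representation, without any need for separate retention functions.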