Latent Semantic Analysis (LSA) is a theory and method for extracting and representing the contextual-usage meaning of words by statistical computations applied to a large corpus of text (Landauer & Dumais, 1997). The underlying idea is that the aggregate of all the word contexts in which a given word does and does not appear provides a set of mutual constraints that largely determines the similarity of meaning of words and sets of words to each other. The adequacy of LSA's reflection of human knowledge has been established in a variety of ways. For example, its scores overlap those of humans on standard vocabulary and subject-matter tests; it mimics human word sorting and category judgments; it simulates word-word and passage-word lexical priming data; and, as reported in three following articles in this issue, it accurately estimates passage coherence, learnability of passages by individual students, and the quality and quantity of knowledge contained in an essay.
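The statistical computation referred to above can be illustrated with a minimal sketch: build a word-by-context count matrix from a corpus, reduce it with a truncated singular value decomposition, and compare words by the cosine of their reduced vectors. The toy documents and the choice of two latent dimensions below are illustrative assumptions, not details from the article.

```python
import numpy as np

# Toy corpus: each document is one "context" (illustrative, not from the article).
docs = [
    "human machine interface for computer applications",
    "a survey of user opinion of computer system response time",
    "the generation of random binary trees",
    "the intersection graph of paths in trees",
]

# Build the word-by-document count matrix X.
vocab = sorted({w for d in docs for w in d.split()})
index = {w: i for i, w in enumerate(vocab)}
X = np.zeros((len(vocab), len(docs)))
for j, d in enumerate(docs):
    for w in d.split():
        X[index[w], j] += 1

# Truncated SVD: keep k dimensions of the latent semantic space.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
word_vecs = U[:, :k] * s[:k]  # words represented in the reduced space

def similarity(w1, w2):
    """Cosine similarity between two words in the latent space."""
    a, b = word_vecs[index[w1]], word_vecs[index[w2]]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Words that share contexts end up close even if they never co-occur
# in exactly the same sentence.
print(similarity("computer", "user"))
print(similarity("computer", "trees"))
```

The mutual-constraint idea shows up in the comparison at the end: "computer" and "user" share a context, so their reduced vectors are far more similar than those of "computer" and "trees", which appear in disjoint sets of documents.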