Models of spoken word recognition vary in the ways in which they capture the relationship between speech input and meaning. Modular accounts prohibit a word's meaning from affecting the computation of its form-based representation, whereas interactive models allow activation at the semantic level to affect phonological processing. We tested these competing hypotheses by manipulating word familiarity and imageability, using lexical decision and repetition tasks. Responses to high-imageability words were significantly faster than those to low-imageability words. Repetition latencies were also analyzed as a function of cohort variables, revealing a significant imageability effect only for words that were members of large cohorts, suggesting that when the mapping from phonology to semantics is difficult, semantic information can help the discrimination process. Thus, these data support interactive models of spoken word recognition.