Three-dimensional acoustic display systems have recently been developed that synthesize virtual sound sources over headphones based on filtering by head-related transfer functions (HRTFs), the direction-dependent spectral changes caused primarily by the pinnae. In this study, 11 inexperienced subjects judged the apparent spatial location of headphone-presented speech stimuli filtered with non-individualized HRTFs. About half of the subjects "pulled" their judgments toward either the median or the lateral-vertical planes, and estimates were almost always elevated. Individual differences were pronounced for the distance judgments; 15% to 46% of stimuli were heard inside the head, with the shortest estimates near the median plane. The results suggest that most listeners can obtain useful azimuth information from speech stimuli filtered by non-individualized HRTFs. Measurements of localization error and reversal rates are comparable to those of a previous study that used broadband noise stimuli.
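The synthesis technique the abstract refers to can be sketched as convolution of a mono signal with a pair of head-related impulse responses (HRIRs), the time-domain counterparts of HRTFs. The sketch below is illustrative only: the HRIRs are crude placeholders that mimic an interaural delay and level difference, not measured responses, and the signal is a tone standing in for speech.

```python
import numpy as np

def binaural_synthesis(mono, hrir_left, hrir_right):
    """Render a mono signal at a virtual direction by convolving it
    with left- and right-ear head-related impulse responses (HRIRs)."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right], axis=0)  # 2 x N stereo buffer

# Placeholder HRIRs (real systems use measured HRTF sets); the delay
# and attenuation crudely suggest a source to the listener's left.
fs = 8000
hrir_l = np.zeros(32); hrir_l[0] = 1.0   # near ear: early, louder
hrir_r = np.zeros(32); hrir_r[5] = 0.6   # far ear: delayed, quieter

t = np.arange(fs) / fs
speech_like = np.sin(2 * np.pi * 220 * t)  # stand-in for a speech signal
stereo = binaural_synthesis(speech_like, hrir_l, hrir_r)
```

In a real display, the HRIR pair would be selected (or interpolated) for the desired source direction from a measured catalog; the study's question is how well such a catalog measured on one listener serves others.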