The left-right signal correspondence problem, regarded as one of the most prominent problems in computational models of stereoscopic vision, has been largely ignored by computational models of stereophonic audition. The correspondence problem, which is trivial when only one acoustic source is present, becomes highly complicated in an environment with multiple sources. We present a computational model able to localize natural complex acoustic signals (one or two human speakers). The model relies mainly on computing the cross-correlation functions of selected frequency channels arriving at the two ears and performing a weighted integration over these functions. In this way, a first attempt is made to establish a correspondence between acoustic features of the two channels. Preliminary results show that this model, which might be compared to ''early vision'' models in computational vision research, can serve as a first step in analyzing the acoustic scene.
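The cross-correlation-and-integration scheme described above can be sketched in a few lines of numpy. This is a minimal illustration, not the authors' implementation: the uniform FFT-based filter bank, the number of bands, and the energy-based weighting are all assumptions made here for concreteness, standing in for whatever cochlear filtering and weighting the model actually uses.

```python
import numpy as np

def itd_estimate(left, right, fs, max_lag_s=1e-3, n_bands=8):
    """Estimate the interaural time difference (seconds) by cross-correlating
    band-limited versions of the two ear signals and integrating the per-band
    correlation functions, weighted by band energy.

    A positive result means the right-ear signal lags the left-ear signal.
    """
    n = len(left)
    max_lag = int(max_lag_s * fs)

    # Crude stand-in for a cochlear filter bank: split the spectrum
    # into equal-width frequency bands via the FFT.
    L, R = np.fft.rfft(left), np.fft.rfft(right)
    edges = np.linspace(0, len(L), n_bands + 1, dtype=int)

    total = np.zeros(2 * max_lag + 1)
    centre = n - 1  # index of zero lag in a 'full' correlation
    for lo, hi in zip(edges[:-1], edges[1:]):
        Lb = np.zeros_like(L)
        Rb = np.zeros_like(R)
        Lb[lo:hi], Rb[lo:hi] = L[lo:hi], R[lo:hi]
        lb, rb = np.fft.irfft(Lb, n), np.fft.irfft(Rb, n)

        # Full cross-correlation of this band, restricted to plausible lags.
        cc = np.correlate(rb, lb, mode="full")
        cc = cc[centre - max_lag: centre + max_lag + 1]

        # Weighted integration: normalize each band's correlation by its
        # energy, then weight bands so that channels carrying signal dominate.
        energy = np.sqrt(np.sum(lb ** 2) * np.sum(rb ** 2))
        total += energy * (cc / (energy + 1e-12))

    lags = np.arange(-max_lag, max_lag + 1)
    return lags[np.argmax(total)] / fs
```

For a single broadband source the summed correlation functions peak at the true interaural delay; with two concurrent speakers the per-band weighting is what lets different frequency channels vote for different source positions, which is where the correspondence problem re-enters.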