We examined how speakers of different languages perceive speech in face-to-face communication. Speakers identified unimodal and bimodal speech syllables constructed from synthetic auditory and visual five-step /ba/-/da/ continua. In the first experiment, Dutch speakers identified the test syllables as either /ba/ or /da/. To explore the robustness of the results, Dutch and English speakers were then given a completely open-ended response task, whereas tasks in previous studies had always specified a set of alternatives. Similar results were found in the two-alternative and open-ended tasks. Identification of the speech segments was influenced by both the auditory and the visual sources of information. The results falsified an auditory dominance model (ADM), which assumes that the contribution of visible speech is dependent on poor-quality audible speech. The results also falsified an additive model of perception (AMP), in which the auditory and visual sources are linearly combined. The fuzzy logical model of perception (FLMP) provided a good description of performance, supporting the claim that multiple sources of continuous information are evaluated and integrated in speech perception. These results replicate previous findings with English, Spanish, and Japanese speakers. Although there were significant performance differences, the model analyses indicated no differences in the nature of information processing across language groups. The performance differences across languages were instead caused by information differences arising from the different phonologies of Dutch and English. These results suggest that the underlying mechanisms for speech perception are similar across languages.
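The abstract does not state the model equations. As a rough guide, a standard formalization in this literature (assuming Massaro-style definitions, not taken from this paper) lets $a_i$ and $v_j$ denote the degree of auditory and visual support for /da/ at steps $i$ and $j$ of the continua, combined linearly under the AMP and multiplicatively under the FLMP:

\[
P_{\mathrm{AMP}}(\text{/da/} \mid A_i, V_j) = w\,a_i + (1-w)\,v_j,
\qquad
P_{\mathrm{FLMP}}(\text{/da/} \mid A_i, V_j) = \frac{a_i\,v_j}{a_i\,v_j + (1-a_i)(1-v_j)}.
\]

Under this sketch, the FLMP predicts that even a weakly informative visual source shifts identification through multiplicative integration rather than being discounted when the auditory signal is clear, which is consistent with the pattern of results summarized above.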