The structure of computer-aided conversations obtained with a prototype system designed for use by physically handicapped, non-speaking people was investigated. The conversational aid requires speech acts suitable for use in a conversation on a fairly broad topic (e.g. holidays) to be generated by the computer user ahead of time. The potential speech acts are then organized to facilitate rapid selection of appropriate items for output via a voice synthesizer during subsequent conversations. Lag-sequential analyses were used to identify sequential dependencies between the speech acts of computer-aided and unaided participants. The obtained sequential dependencies were broadly similar to those found for normal conversations on the same topic. Where differences existed, they were readily explicable in terms of conversational goals appropriate to the different modes of discourse. The results of the discourse analyses were interpreted as evidence for the coherence of the computer-aided dialogues at the level of speech acts. It was suggested that the sequential dependencies found between speech acts, and between categories within alternative taxonomies, might be used to reduce switching pause times in computer-aided conversations by building them into the system as predictions.