In this paper, we present two major parts of an interface from American
Sign Language (ASL) to computer applications, currently under development:
a hand tracker and an ASL parser. The hand tracker extracts information
about handshape, position, and motion from image sequences. As an aid in
this process, the signer wears a pair of gloves with colour-coded markers
on the joints and fingertips. We also present a computational model of
American Sign Language. This model is realized in an ASL parser, which
consists of a DCG grammar and a non-lexical component that records
non-manual and spatial information over an ASL discourse.
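To illustrate the division of labour described above, the following is a minimal sketch, not the authors' grammar: a toy parser accepts gloss sequences of the form NP V NP, while a separate, non-lexical record assigns spatial loci to referents over the discourse, in the spirit of the spatial component mentioned above. The lexicon entries and the locus-naming scheme are invented for illustration.

```python
# Hypothetical lexicon of ASL glosses (all entries are invented examples).
LEXICON = {
    "JOHN": "NP",
    "MARY": "NP",
    "GIVE": "V",
}

def parse_sentence(glosses, discourse):
    """Parse a gloss sequence of the form NP V NP.

    On success, return a parse tree and update `discourse`, the
    non-lexical record, with a spatial locus for each new referent.
    """
    if len(glosses) != 3:
        return None
    subj, verb, obj = glosses
    if (LEXICON.get(subj) != "NP"
            or LEXICON.get(verb) != "V"
            or LEXICON.get(obj) != "NP"):
        return None
    # Non-lexical component: assign each new referent a locus in
    # signing space, so later sentences can refer back to it.
    for referent in (subj, obj):
        if referent not in discourse:
            discourse[referent] = f"locus-{len(discourse) + 1}"
    return ("S", ("NP", subj), ("V", verb), ("NP", obj))

discourse = {}
tree = parse_sentence(["JOHN", "GIVE", "MARY"], discourse)
```

In a real DCG implementation the grammar rules would be declarative clauses rather than hand-written checks, but the key design point survives: the parse tree is built by the grammar, while the discourse record persists across sentences.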