J. Xu et al., GESTURE DESCRIPTION AND STRUCTURE OF A DICTIONARY FOR INTELLIGENT COMMUNICATION OF SIGN LANGUAGE IMAGES, Electronics and communications in Japan. Part 3, Fundamental electronic science, 77(3), 1994, pp. 62-74
In the intelligent transmission of sign language images based on semantic information, one possible approach is to transform the semantic information on the receiving side into motion parameters of the upper extremities and to synthesize an animated image of the sign language. For this purpose, a word dictionary describing the upper-limb motion corresponding to the semantic information must be constructed. A problem is that the human upper limb has many degrees of freedom, so it is difficult to skillfully control the upper-limb posture using motion parameters for the joint movements. Thus, the complex motion corresponding to the sign language is realized by combining the posture and motion of the upper limb. This paper proposes a method in which the hand shapes used in sign language are described by a table, and the motion of the upper limb is described and represented by motion features of the gesture based on a joint angle model. A method is also proposed to construct two kinds of word dictionaries, called the "representative dictionary" and the "realizing dictionary," to generate the animated sign language image from the semantic information. The representative dictionary gives the basic motion of the sign language word. The realizing dictionary describes control parameters such as the iteration and speed of the motion. A word in sign language is realized by adding the realizing form to the representative form. In the proposed method, the description of a sign language word is simplified, which makes it easy to control the upper-limb posture as well as to iterate or emphasize the motion. Using the proposed method, a word dictionary is constructed to describe the gestures of sign language. A system that transforms an input sentence in Japanese into an animated image is discussed in detail.
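
To make the two-dictionary idea concrete, the following is a minimal, purely illustrative sketch in Python; the paper does not specify any data format, and all names here (RepresentativeEntry, RealizingEntry, hand_shape, joint_angles, iterations, speed, realize) are hypothetical. It only shows how a representative form (basic motion) and a realizing form (control parameters) might be combined per word.

```python
# Illustrative sketch only: field names and the realize() helper are
# hypothetical, not taken from the paper.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class RepresentativeEntry:
    """Basic motion of a sign language word (hypothetical fields)."""
    word: str
    hand_shape: str                       # key into a hand-shape table
    joint_angles: List[Dict[str, float]]  # keyframes of joint-angle parameters


@dataclass
class RealizingEntry:
    """Control parameters applied to the representative form (hypothetical)."""
    iterations: int = 1   # how many times the basic motion is repeated
    speed: float = 1.0    # playback-speed multiplier (would scale frame timing)


def realize(rep: RepresentativeEntry, rlz: RealizingEntry) -> List[Dict[str, float]]:
    """Combine the two forms: here simply repeat the keyframe sequence."""
    return rep.joint_angles * rlz.iterations


# Example usage with made-up values
rep = RepresentativeEntry(
    word="arigatou",
    hand_shape="flat_hand",
    joint_angles=[{"elbow": 30.0, "wrist": 10.0}, {"elbow": 60.0, "wrist": 0.0}],
)
frames = realize(rep, RealizingEntry(iterations=2, speed=1.5))
print(len(frames))  # 4 keyframes after repetition
```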