Most of today's virtual environments are populated with some kind of autonomous, life-like agents. Such agents follow a preprogrammed sequence of behaviors that excludes the user as a participating entity in the virtual society. In order to make inhabited virtual reality an attractive place for information exchange and social interaction, we need to equip the autonomous agents with some perception and interpretation skills. In this paper we present one such skill: human action recognition. In contrast to human-computer interfaces that focus on speech or hand gestures, we propose full-body integration of the user. We present a model of human actions along with a real-time recognition system. To address the bidirectional nature of human-computer interfaces, we also discuss some action response issues. In particular, we describe a motion management library that solves animation continuity and mixing problems. Finally, we illustrate our system with two examples and discuss what we have learned.