The Multimodal User Supervised Interface and Intelligent Control (MUSIIC) project focuses on a multimodal human-machine interface that addresses users' need to manipulate familiar objects in an unstructured environment. The control of a robot by individuals with significant physical limitations presents a challenging telemanipulation problem. MUSIIC addresses it with a unique user interface that integrates the user's commands (speech) and gestures (pointing) with autonomous planning techniques (knowledge bases and 3-D vision). The resulting test-bed offers the opportunity to study telemanipulation by individuals with physical disabilities, and the approach can be generalized into an effective technique for other telemanipulation settings, including remote and time-delayed ones. This paper focuses on the knowledge-driven planning mechanism that is central to the MUSIIC system.