SAVI: an actively controlled teleconferencing system

Citation
R. Herpers et al., SAVI: an actively controlled teleconferencing system, IMAGE VIS C, 19(11), 2001, pp. 793-804
Number of citations
23
Subject categories
AI Robotics and Automatic Control
Journal title
IMAGE AND VISION COMPUTING
Journal ISSN
0262-8856
Volume
19
Issue
11
Year of publication
2001
Pages
793 - 804
Database
ISI
SICI code
0262-8856(20010901)19:11<793:SAACTS>2.0.ZU;2-I
Abstract
A Stereo Active Vision Interface (SAVI) is introduced which detects frontal faces in real-world environments and performs particular active control tasks dependent on hand gestures given by the person the system attends to. The SAVI system is thought of as a smart user interface for teleconferencing, telemedicine, and distance learning applications.

To reduce the search space in the visual scene, the processing starts with the detection of connected skin colour regions applying a new radial scanline algorithm. Subsequently, facial features are searched for in the most salient skin colour region while the skin colour blob is actively kept in the centre of the visual field of the camera system. After a successful evaluation of the facial features, the associated person is able to give control commands to the system. For this contribution only visual control commands are investigated, but nothing precludes voice or any other kind of command. These control commands can either affect the observing system itself or any other active or robotic system wired to the principal observing system via TCP/IP sockets.

The system is designed as a perception-action cycle (PAC), processing sensory data of different kinds and qualities. Both the vision module and the head motion control module work at frame rate on a PC platform. Hence, the system is able to react instantaneously to changing conditions in the visual scene. (C) 2001 Elsevier Science B.V. All rights reserved.
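The abstract does not spell out the radial scanline algorithm, so the following is only a minimal illustrative sketch of the general idea: walk rays outward from a seed pixel and record where the skin colour ends, giving a coarse extent of a connected skin colour blob. The colour test, thresholds, and ray-walking scheme are assumptions chosen for clarity, not the authors' implementation.

```python
# Hypothetical sketch of a radial scanline skin-blob probe.
# The colour test and all thresholds are illustrative assumptions,
# not the algorithm published by Herpers et al.
import math

def is_skin(r, g, b, r_min=95, rg_gap=15, rb_gap=15):
    """Very rough RGB skin-colour test (hypothetical thresholds)."""
    return r > r_min and (r - g) > rg_gap and (r - b) > rb_gap

def radial_blob_extent(image, seed, num_rays=16, max_misses=3):
    """Walk rays outward from `seed` and record where skin colour ends.

    `image` is a list of rows of (r, g, b) tuples; returns one boundary
    point per ray, approximating the extent of the connected skin region
    around the seed pixel.
    """
    height, width = len(image), len(image[0])
    sx, sy = seed
    boundary = []
    for k in range(num_rays):
        angle = 2.0 * math.pi * k / num_rays
        dx, dy = math.cos(angle), math.sin(angle)
        misses, step, last_hit = 0, 0, seed
        while misses < max_misses:
            step += 1
            x, y = int(sx + step * dx), int(sy + step * dy)
            if not (0 <= x < width and 0 <= y < height):
                break          # ray left the image
            if is_skin(*image[y][x]):
                last_hit, misses = (x, y), 0
            else:
                misses += 1    # tolerate a few non-skin pixels (noise)
        boundary.append(last_hit)
    return boundary
```

The centroid of the returned boundary points could serve as the target that a pan-tilt head keeps in the centre of the visual field, in the spirit of the active fixation behaviour described in the abstract; the actual control loop and facial feature evaluation of the SAVI system are not reproduced here.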