Constructing qualitative event models automatically from video input

Citation
J. Fernyhough et al., Constructing qualitative event models automatically from video input, IMAGE VIS C, 18(2), 2000, pp. 81-103
Citation count
37
Subject categories
AI Robotics and Automatic Control
Journal title
IMAGE AND VISION COMPUTING
Journal ISSN
02628856
Volume
18
Issue
2
Year of publication
2000
Pages
81 - 103
Database
ISI
SICI code
0262-8856(200001)18:2<81:CQEMAF>2.0.ZU;2-V
Abstract
We describe an implemented technique for generating event models automatically based on qualitative reasoning and a statistical analysis of video input. Using an existing tracking program which generates labelled contours for objects in every frame, the view from a fixed camera is partitioned into semantically relevant regions based on the paths followed by moving objects. The paths are indexed with temporal information so objects moving along the same path at different speeds can be distinguished. Using a notion of proximity based on the speed of the moving objects and qualitative spatial reasoning techniques, event models describing the behaviour of pairs of objects can be built, again using statistical methods. The system has been tested on a traffic domain and learns various event models, expressed in the qualitative calculus, which represent human-observable events. The system can then be used to recognise subsequent selected event occurrences or unusual behaviours. (C) 2000 Elsevier Science B.V. All rights reserved.
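The abstract names two mechanisms, path-based region learning and speed-dependent proximity, without implementation detail. The Python sketch below is only a rough illustration of those two ideas under simple assumptions; the names (Detection, accumulate_path_regions, qualitative_proximity), the grid-cell region representation, and the thresholds are illustrative choices, not the authors' implementation.

```python
from collections import defaultdict
from dataclasses import dataclass
from math import hypot


@dataclass
class Detection:
    """One tracked-object observation in one frame (assumed tracker output)."""
    obj_id: int
    frame: int
    x: float
    y: float


def accumulate_path_regions(detections, cell=20, min_objects=2):
    """Grid-based stand-in for path regions: keep image cells that several
    distinct tracked objects have passed through."""
    visitors = defaultdict(set)
    for d in detections:
        visitors[(int(d.x) // cell, int(d.y) // cell)].add(d.obj_id)
    return {c for c, ids in visitors.items() if len(ids) >= min_objects}


def qualitative_proximity(a, b, speed_a, speed_b, scale=1.0):
    """Speed-scaled 'near'/'far' relation: faster objects count as near
    over a larger metric distance."""
    dist = hypot(a.x - b.x, a.y - b.y)
    threshold = scale * (speed_a + speed_b + 1.0)
    return "near" if dist <= threshold else "far"


if __name__ == "__main__":
    # Two objects moving along the same horizontal path at the same speed.
    track = [Detection(1, f, 10.0 * f, 50.0) for f in range(5)]
    track += [Detection(2, f, 10.0 * f, 52.0) for f in range(5)]
    print(accumulate_path_regions(track))
    print(qualitative_proximity(track[0], track[5], speed_a=10.0, speed_b=10.0))
```

In the paper, sequences of such qualitative relations between pairs of objects would then be analysed statistically to yield event models; this sketch only shows how the low-level qualitative state might be derived from tracked positions.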