In many cases, data networks need to be monitored to ensure that they stay within acceptable parameters. Monitoring consists of measuring properties of the network and inferring an aggregate predicate from these measurements.
It is often too complex, or too expensive, to conduct explicit monitoring at all times. In these cases, information about the evolution of the network status (integrity constraints) can allow us to use past measurements to infer future behavior, thus reducing the monitoring cost.
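The cost reduction described above can be illustrated with a minimal sketch. Assume a hypothetical integrity constraint stating that a monitored value can drift by at most `MAX_DRIFT` per time step (the constant names, the threshold predicate, and the function are illustrative assumptions, not the paper's algorithm): a single fresh measurement then guarantees the predicate for a computable number of future steps, during which explicit measurements can be skipped.

```python
import math

MAX_DRIFT = 2.0    # assumed integrity constraint: max change per time step
THRESHOLD = 100.0  # assumed predicate: the value must stay below this bound

def steps_until_next_measurement(measured_value):
    """Number of future steps for which 'value < THRESHOLD' is guaranteed
    without remeasuring, given the bounded-drift integrity constraint."""
    slack = THRESHOLD - measured_value
    if slack <= 0:
        return 0  # predicate already violated (or on the boundary): act now
    # In the worst case the value climbs by MAX_DRIFT each step, so the
    # largest safe k satisfies measured_value + k * MAX_DRIFT < THRESHOLD.
    return max(0, math.ceil(slack / MAX_DRIFT) - 1)

print(steps_until_next_measurement(90.0))   # safe to skip 4 steps
print(steps_until_next_measurement(100.0))  # no slack: measure immediately
```

A tighter drift bound yields longer measurement-free intervals, which is exactly the sense in which stronger integrity constraints lower the monitoring cost.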
We provide a formal description of the problem of monitoring rapidly changing data, which we call the monitoring problem. We then classify this problem in terms of the integrity constraints that govern the evolution of the environment, and propose different algorithms for each of these classes. For the most restricted case, we give a greedy algorithm that is optimal, while for the more general cases we use competitive analysis to show that optimal worst-case and average-case measuring algorithms exist. We then present heuristics for low-cost, low-complexity measuring algorithms. We believe that the results of this paper can serve as a framework for further studies.