F. Luccio and L. Pagli, COMPUTING WITH TIME-VARYING DATA - SEQUENTIAL COMPLEXITY AND PARALLEL SPEED-UP, Theory of Computing Systems, 31(1), 1998, pp. 5-26
Citations number: 18
Subject Categories
System Science, Mathematics, Computer Science Theory & Methods
We discuss the design of sequential and parallel algorithms working on a time-increasing data set, within two paradigms of computation. In Paradigm 1 the process terminates when all the data currently arrived have been treated, independently of future arrivals. In Paradigm 2 an endless process is divided into stages, and in each stage the computation is carried out on the data set updated up to the previous stage. A problem may be unsolvable because no algorithm is fast enough to cope with the increasing data set. The computational cost of succeeding algorithms is studied in a new perspective, in the sequential RAM and parallel PRAM models, with the running time possibly tending to infinity for proper values of the parameters. It is shown that the traditional time bounds of parallel versus sequential computation (i.e., speed-up and slow-down under the so-called Brent's principle) do not hold, and new bounds are provided. Several problems are examined in the new paradigms, and the new algorithms are compared with the known ones designed for time-invariant data. Optimal sequential and parallel algorithms are also defined, and given whenever possible. In particular, it is shown that some problems gain nothing from a parallel solution, while others can be practically solved only in parallel. Paradigm 1 is the more innovative, and the related results on parallel speed-up and scaling are probably unexpected. Paradigm 2 opens a new perspective on dynamic algorithms, because processing batches of data may be more efficient than processing single incoming data on-line.
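The two paradigms can be sketched in code. The following is an illustrative toy model, not taken from the paper: `arrival(t)` (the number of items arriving at step `t`) and `rate` (items treated per step) are hypothetical names introduced here. Paradigm 1 terminates as soon as the backlog of arrived-but-untreated data is empty, and may fail to terminate when arrivals outpace processing; Paradigm 2 runs forever in stages, each stage working on the batch accumulated up to the previous stage.

```python
def paradigm1_termination(arrival, rate, t_max=10_000):
    """Paradigm 1 (sketch): process until every item that has arrived
    so far is treated, regardless of future arrivals. Returns the
    termination step, or None if the backlog never empties within
    t_max steps (no algorithm is fast enough)."""
    backlog = 0
    for t in range(t_max):
        backlog += arrival(t)             # new data arrives at step t
        backlog = max(0, backlog - rate)  # treat up to `rate` items
        if backlog == 0:
            return t + 1                  # all currently arrived data treated
    return None                           # process never catches up


def paradigm2_batches(arrival, n_stages):
    """Paradigm 2 (sketch): an endless staged process, truncated here to
    n_stages. Stage t computes on the data set updated up to the
    previous stage; data arriving during stage t waits for stage t+1.
    Returns the batch size handled by each stage."""
    seen = 0        # items arrived so far
    processed = 0   # items treated in completed stages
    sizes = []
    for t in range(n_stages):
        batch = seen - processed   # data updated up to the previous stage
        sizes.append(batch)
        processed += batch         # this stage treats the whole batch
        seen += arrival(t)         # arrivals during the current stage
    return sizes
```

With a one-shot burst of 3 items and `rate=1`, Paradigm 1 terminates at step 3; with a steady arrival of 2 items per step and `rate=1`, the backlog grows without bound and the process never terminates, matching the abstract's point that some instances are unsolvable under Paradigm 1. Under Paradigm 2 with steady arrivals, every stage after the first handles a batch of constant size.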