Many high-level parallel programming languages allow for fine-grained parallelism. As in the popular work-time framework for parallel algorithm design, programs written in such languages can express the full parallelism in the program without specifying the mapping of program tasks to processors. A common concern in executing such programs is to schedule tasks to processors dynamically so as to minimize not only the execution time, but also the amount of space (memory) needed. Without careful scheduling, the parallel execution on p processors can use a factor of p or more space than a sequential implementation of the same program.
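The space gap can be illustrated on a balanced binary task tree, using the number of simultaneously live (spawned but unfinished) tasks as a rough proxy for space. This is an illustrative sketch, not the paper's space model: a sequential depth-first order keeps only one root-to-leaf path live, while a fully parallel breadth-first order makes a whole level live at once.

```python
# Contrast peak "space" of a depth-first (sequential) order with a
# breadth-first (fully parallel) order on a balanced binary task tree.
# Space here = count of simultaneously live tasks, an illustrative proxy.

def dfs_peak(depth):
    # Sequential DFS keeps one root-to-leaf path live at a time.
    peak = 0
    def visit(d, live):
        nonlocal peak
        peak = max(peak, live)
        if d < depth:
            visit(d + 1, live + 1)  # left subtree finishes before right starts
            visit(d + 1, live + 1)
    visit(0, 1)
    return peak

def bfs_peak(depth):
    # Level-by-level execution makes an entire level live at once.
    return max(2 ** d for d in range(depth + 1))

assert dfs_peak(10) == 11     # one path: depth + 1 tasks
assert bfs_peak(10) == 1024   # the entire bottom level
```

With enough processors the breadth-first order is the natural one, which is why unconstrained parallel execution can need far more memory than the sequential run.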
This paper first identifies a class of parallel schedules that are provably efficient in both time and space. For any computation with w units of work and critical path length d, and for any sequential schedule that takes space s(1), we provide a parallel schedule that takes fewer than w/p + d steps on p processors and requires less than s(1) + p·d space. This matches the lower bound that we show, and significantly improves upon the best previous bound of s(1)·p space for the common case where d is much less than s(1).
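The w/p + d time bound can be checked empirically with a small greedy-schedule simulation. This is a sketch under assumed conventions (unit-time tasks, a dependence DAG given as a predecessor map); the DAG below is made up for the example, and the greedy policy is illustrative rather than the paper's specific scheduler.

```python
# Simulate a greedy p-processor schedule on a unit-task DAG and check
# that it finishes in fewer than w/p + d steps (a Brent-style bound).
from collections import defaultdict

def greedy_schedule(deps, p):
    """deps: dict task -> set of predecessor tasks; all tasks take unit time."""
    indeg = {t: len(deps[t]) for t in deps}
    succs = defaultdict(list)
    for t, preds in deps.items():
        for q in preds:
            succs[q].append(t)
    ready = [t for t in deps if indeg[t] == 0]
    done, steps = 0, 0
    while done < len(deps):
        # Each step, execute up to p ready tasks (greedy: never idle if work exists).
        batch, ready = ready[:p], ready[p:]
        steps += 1
        for t in batch:
            done += 1
            for s in succs[t]:
                indeg[s] -= 1
                if indeg[s] == 0:
                    ready.append(s)
    return steps

def critical_path(deps):
    memo = {}
    def depth(t):
        if t not in memo:
            memo[t] = 1 + max((depth(q) for q in deps[t]), default=0)
        return memo[t]
    return max(depth(t) for t in deps)

# Example fork-join DAG: w = 10 unit tasks, critical path d = 3.
deps = {
    'root': set(),
    **{f'leaf{i}': {'root'} for i in range(8)},
    'join': {f'leaf{i}' for i in range(8)},
}
w, d, p = len(deps), critical_path(deps), 4
steps = greedy_schedule(deps, p)
assert steps < w / p + d   # 4 steps, versus the bound 10/4 + 3 = 5.5
```

The bound holds for any greedy schedule; the paper's contribution is identifying which greedy schedules additionally achieve the s(1) + p·d space bound.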
The paper then describes a scheduler for implementing high-level languages with nested parallelism that generates schedules in this class. During program execution, as the structure of the computation is revealed, the scheduler keeps track of the active tasks, allocates the tasks to the processors, and performs the necessary task synchronization. The scheduler is itself a parallel algorithm, and incurs at most a constant-factor overhead in time and space, even when the scheduling granularity is individual units of work. The algorithm is the first efficient solution to the scheduling problem discussed here, even if space considerations are ignored.
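A bare-bones sketch of the bookkeeping involved for nested fork-join parallelism: a pool of ready tasks, up to p tasks dispatched per round, and a join counter per fork so a continuation runs only after all its children finish. The task names and structure below are made up for the example, and this is illustrative only; the paper's scheduler additionally chooses which ready tasks to run so that the schedule stays space-efficient.

```python
# Round-based scheduling of a fork-join fragment:
# 'parent' forks two children, then 'cont' runs after both complete.
children = {'parent': ['child0', 'child1'], 'child0': [], 'child1': [], 'cont': []}
join_of = {'child0': 'cont', 'child1': 'cont'}   # finishing both enables 'cont'
join_count = {'cont': 2}

ready, executed, rounds, p = ['parent'], [], 0, 2
while ready:
    batch, ready = ready[:p], ready[p:]   # allocate up to p tasks this round
    rounds += 1
    for t in batch:
        executed.append(t)
        ready.extend(children[t])         # newly forked tasks become ready
        if t in join_of:                  # synchronization at the join point
            j = join_of[t]
            join_count[j] -= 1
            if join_count[j] == 0:
                ready.append(j)

assert executed == ['parent', 'child0', 'child1', 'cont']
assert rounds == 3   # parent; both children in parallel; continuation
```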