The amount of memory required by a parallel program may be spectacularly larger than the memory required by an equivalent sequential program, particularly for programs that use recursion extensively. Since most parallel programs are nondeterministic in behavior, even when computing a deterministic result, parallel memory requirements may vary from run to run, even with the same data. Hence, parallel memory requirements may be both large (relative to the memory requirements of an equivalent sequential program) and unpredictable. We assume that each parallel program has an underlying sequential execution order that may be used as a basis for predicting parallel memory requirements. We propose a simple restriction that is sufficient to ensure that any program that will run in n units of memory sequentially can run in mn units of memory on m processors, using a scheduling algorithm that is always within a factor of two of optimal with respect to time. Any program can be transformed into one that satisfies the restriction, but some potential parallelism may be lost in the transformation. Alternatively, it is possible to define a parallel programming language in which only programs satisfying the restriction can be written.
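For concreteness, the claimed guarantees can be restated symbolically; the notation below is introduced here for illustration and does not appear in the original statement. Let $S_1$ be the memory used by the underlying sequential execution, $S_m$ the memory used when the restricted program is run on $m$ processors under the proposed scheduler, $T_m$ the resulting parallel running time, and $T_m^{*}$ the optimal running time on $m$ processors. The space and time claims are then

$$S_m \le m \, S_1, \qquad T_m \le 2 \, T_m^{*}.$$

As a numerical illustration under these assumptions, a restricted program that runs in $n = 100$ memory units sequentially is guaranteed to run in at most $16 \times 100 = 1600$ memory units on $m = 16$ processors, while finishing within twice the optimal parallel time.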