Asynchronous pipelining is a form of parallelism that is useful in both distributed and shared memory systems. We show that asynchronous pipeline schedules generalize both noniterative DAG (directed acyclic graph) schedules and simpler pipeline schedules, unifying these two types of scheduling. We generalize previous work on determining whether a pipeline schedule will deadlock, and generalize Reiter's well-known formula [1] for determining the iteration interval of a deadlock-free schedule, which is the primary measure of the execution time of a schedule. Our generalizations account for nonzero communication times (easy) and the assignment of multiple tasks to processors (nontrivial). A key component of our generalized approach to pipeline schedule analysis is the use of pipeline scheduling edges with potentially negative data dependence distances. We also discuss the implementation of an asynchronous pipeline schedule at runtime; show how to efficiently simulate pipeline execution on a sequential processor; define and derive bounds on the startup time of a schedule, which is a secondary schedule performance measure; and describe a new algorithm for evaluating the iteration interval formula.
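For context, Reiter's iteration interval formula mentioned above is, in its standard form, a maximum cycle ratio over the dependence graph; the notation below (task times $t(v)$, dependence distances $d(e)$) is ours, chosen to sketch the classical statement rather than this paper's generalization:

```latex
% Classical Reiter bound (sketch, standard formulation):
% G = (V, E) is the dependence graph, t(v) the execution time of task v,
% d(e) the dependence distance (in iterations) on edge e.
% The iteration interval T of a deadlock-free schedule satisfies
\[
  T \;=\; \max_{\text{cycles } C \subseteq G}
          \frac{\sum_{v \in C} t(v)}{\sum_{e \in C} d(e)},
\]
% i.e., the cycle with the largest ratio of total work to total
% dependence distance limits the steady-state throughput.
```

Intuitively, each cycle's work must be amortized across the iterations its dependence distances span, so the tightest cycle sets the achievable interval.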