Brent's scheduling principle provides a general simulation scheme when fewer processors are available than specified by the fastest parallel algorithm. Such a scheme preserves, under slow-down, the actual number of executed operations, also called work.
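For concreteness (this recap uses our notation, not necessarily the paper's): if a parallel algorithm performs $W$ operations in total and runs in $T$ parallel steps given sufficiently many processors, Brent's principle guarantees that $p$ processors suffice to simulate it in time
\[
T_p \;\le\; \frac{W}{p} + T,
\]
so the simulated work $p\,T_p \le W + pT$ remains $O(W)$ whenever $p = O(W/T)$.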
In this paper we take the complementary viewpoint: rather than considering the work-preserving slow-down of some fast parallel algorithm, we investigate the achievable speedups of a computation while preserving the work of the best-known sequential algorithm for the same problem.
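In the standard terminology (again a recap in our notation): given a problem whose best-known sequential algorithm runs in time $T_s(n)$, a parallel algorithm using $p(n)$ processors and parallel time $t(n)$ achieves a work-preserving speedup if
\[
p(n)\,t(n) \;=\; O\bigl(T_s(n)\bigr),
\]
in which case $t(n) = O\bigl(T_s(n)/p(n)\bigr)$, i.e., the speedup over the sequential algorithm grows in proportion to the number of processors used.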
The proposed technique, eminently applicable to problems of a matrix-computational flavor, achieves its result through the interplay of two algorithms with significantly different features. Analogous but structurally different ``interplays'' have been used previously to improve the algorithmic efficiency of graph computations, selection, and list ranking. We demonstrate the efficacy of our technique for the computation of path algebras in graphs and digraphs and for various fundamental computations in linear algebra. Some of the new algorithms may have practical value; for instance, we substantially improve the algorithmic performance of the parallel solution of triangular and Toeplitz linear systems of equations and of the computation of the transitive closure of digraphs.
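As a concrete illustration of the tension at issue for the last example (our sketch, not the paper's algorithm): the classical fast parallel route to transitive closure repeatedly squares the Boolean matrix $I + A$. Each of the $\lceil \log_2 (n-1) \rceil$ squarings parallelizes well, but the total work is a $\log n$ factor larger than that of a single Boolean matrix product, which is asymptotically all a sequential algorithm needs; a work-preserving parallel algorithm must close exactly this kind of gap.

```python
import numpy as np

def transitive_closure_by_squaring(adj: np.ndarray) -> np.ndarray:
    """Reflexive-transitive closure of a digraph by repeated Boolean
    squaring of (I + A).

    Each squaring doubles the path lengths accounted for, so
    ceil(log2(n-1)) squarings suffice.  Every squaring is one n-by-n
    Boolean matrix product -- highly parallel, but the log-factor of
    extra products is precisely the work overhead that a
    work-preserving algorithm must avoid.
    """
    n = adj.shape[0]
    reach = adj.astype(bool) | np.eye(n, dtype=bool)  # paths of length <= 1
    k = 1
    while k < n - 1:  # after a squaring, reach covers paths of length <= 2k
        reach = (reach.astype(np.int64) @ reach.astype(np.int64)) > 0
        k *= 2
    return reach

# Tiny usage example: a directed 4-cycle, whose closure is the complete relation.
A = np.zeros((4, 4), dtype=bool)
for i in range(4):
    A[i, (i + 1) % 4] = True
print(transitive_closure_by_squaring(A).all())  # True: every vertex reaches every vertex
```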