We investigate the average-case speed and scalability of parallel algorithms executing on multiprocessors. Our performance metrics are average-speed and isospeed scalability. For the purpose of average-case performance prediction, we formally define the concepts of average-case average-speed and average-case isospeed scalability. Modeling parallel algorithms on multiprocessors with task precedence graphs, we are mainly interested in the effects of synchronization overhead and load imbalance on the performance of parallel computations. Thus, we focus on the structures of parallel computations, whose inherent sequential parts limit high performance. Task execution times are treated as random variables, so that we can analyze
the average-case performance of parallel computations. For several typical classes of task graphs, including iterative computations, search trees, partitioning algorithms, and diamond dags, we derive the growth rate of the number of tasks as well as the isospeed scalability required to maintain a constant average-speed. In addition to analytical results, extensive numerical data are presented. An important finding of our study is that while a parallel computation can be made scalable by increasing the problem size together with the system size, it is actually the amount of parallelism that should scale up with the system size. We also argue that, under our probabilistic model of parallel algorithms and the list scheduling strategy, the number of tasks should grow at least at the rate of Θ(P log P), where P is the number of processors, so that a constant average-speed can be maintained. Furthermore, Θ(log P / log P') is the highest isospeed scalability achievable.
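The model described above, a task precedence graph with random task execution times, executed by list scheduling on P identical processors, can be explored empirically. The following is a minimal, hypothetical sketch (not the paper's own code; the graph encoding and function names are illustrative assumptions) that greedily list-schedules a task graph and measures the resulting average-speed, taken here as total work W divided by P times the makespan T:

```python
import heapq

def list_schedule(preds, durations, p):
    """Greedy list scheduling of a task precedence graph on p processors.

    preds[i] lists the predecessor task ids of task i; durations[i] is
    its (possibly random) execution time. Returns the makespan T.
    """
    n = len(durations)
    succs = [[] for _ in range(n)]
    indeg = [0] * n
    for i, ps in enumerate(preds):
        for j in ps:
            succs[j].append(i)
            indeg[i] += 1
    ready = [i for i in range(n) if indeg[i] == 0]
    running = []            # min-heap of (finish_time, task)
    free, now = p, 0.0
    while ready or running:
        # start ready tasks on idle processors (greedy: never idle a
        # processor while some task is ready)
        while ready and free > 0:
            t = ready.pop()
            free -= 1
            heapq.heappush(running, (now + durations[t], t))
        # advance time to the next task completion
        now, t = heapq.heappop(running)
        free += 1
        for s in succs[t]:
            indeg[s] -= 1
            if indeg[s] == 0:
                ready.append(s)
    return now

def average_speed(preds, durations, p):
    """Average-speed = achieved speed per processor = W / (p * T)."""
    work = sum(durations)
    return work / (p * list_schedule(preds, durations, p))
```

For instance, a 4-task chain (a fully sequential structure) of unit-time tasks on 2 processors yields makespan 4.0 and average-speed 0.5, while 4 independent unit-time tasks yield makespan 2.0 and average-speed 1.0, illustrating how the inherent sequential structure, rather than the task count alone, bounds performance. Drawing `durations` from a random distribution and averaging over many runs gives an empirical estimate of the average-case average-speed.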