We investigate the claim that functional languages offer low-cost parallelism in the context of symbolic programs on modest parallel architectures. In our investigation we present the first comparative study of the construction of large applications in a parallel functional language, in our case in Glasgow Parallel Haskell (GPH). The applications cover a range of application areas, use several parallel programming paradigms, and are measured on two very different parallel architectures.
On the applications level the most significant result is that we are able to achieve modest wall-clock speedups (between factors of 2 and 10) over the optimised sequential versions for all but one of the programs. Speedups are obtained even for programs that were not written with the intention of being parallelised. These gains are achieved with relatively small programmer effort. One reason for the relative ease of parallelisation is the use of evaluation strategies, a new parallel programming technique that separates the algorithm from the co-ordination of parallel behaviour.
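A minimal sketch of this separation, written against the present-day Control.Parallel.Strategies module (which descends from the strategies described here; the function names `squares` and `parSquares` are illustrative, not from the paper):

```haskell
import Control.Parallel.Strategies (using, parList, rseq)

-- The algorithm: an ordinary sequential map.
squares :: [Int] -> [Int]
squares xs = map (^ 2) xs

-- The co-ordination, added separately: 'using' attaches a strategy
-- that sparks each list element for parallel evaluation. The result
-- is identical to the sequential version; only the evaluation
-- behaviour changes.
parSquares :: [Int] -> [Int]
parSquares xs = squares xs `using` parList rseq
```

Because the strategy is composed onto the existing expression with `using`, the algorithmic code needs no modification to be parallelised.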
On the language level we show that the combination of lazy and parallel evaluation is useful for achieving a high level of abstraction. In particular we can describe top-level parallelism, and also preserve module abstraction by describing parallelism over the data structures provided at the module interface ('data-oriented parallelism'). Furthermore, we find that the determinism of the language is helpful, as is the largely implicit nature of parallelism in GPH. Copyright (C) 1999 John Wiley & Sons, Ltd.