This paper describes COLTHPF, a run-time support specifically designed for the co-ordination of concurrent and communicating HPF tasks. COLTHPF is implemented on top of MPI and requires only small changes to the run-time support of the HPF compiler used. Although the COLTHPF API can be used directly by programmers to write applications as a flat collection of interacting data-parallel tasks, we believe it can be used more productively through a compiler for a simple high-level co-ordination language, which helps programmers structure a set of data-parallel HPF tasks according to common forms of task parallelism. The paper outlines design and implementation issues and discusses the main differences from other approaches to exploiting task parallelism in the HPF framework. We show how COLTHPF can be used to implement common forms of parallelism, e.g. pipelines and processor farms, and we present experimental results for both synthetic micro-benchmarks and sample applications. The experiments were conducted on an SGI/Cray T3E using Adaptor, a public domain HPF compiler. Copyright (C) 1999 John Wiley & Sons, Ltd.