The standard Message Passing Interface (MPI) is used to parallelize Newmark's method. The linear matrix equation encountered at each time step is solved using a preconditioned conjugate gradient algorithm. Data are distributed over the processors of a given parallel computer on a degree-of-freedom basis; this produces effective load balance between the processors and leads to a highly parallelized code. The portability of the implementation of this scheme is tested by solving some simple problems on two different machines: an SGI Origin2000 and an IBM SP2. The measured times demonstrate the efficiency of the approach and highlight the maintenance advantages that arise from using a standard parallel library such as MPI.
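
The following is a minimal sketch, not the authors' implementation, of the two ingredients named above: degrees of freedom block-distributed over MPI ranks and a preconditioned conjugate gradient loop whose inner products are formed with global reductions. The test matrix is assumed diagonal so the matrix-vector product is purely local, and a simple Jacobi preconditioner is assumed; a real finite-element matrix would need a halo exchange of shared degrees of freedom at the matvec step, and the paper does not specify which preconditioner was used.

/*
 * Sketch only: Jacobi-preconditioned CG on a diagonal SPD system,
 * with the degrees of freedom block-distributed over MPI ranks.
 */
#include <math.h>
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

/* Global dot product over the locally owned degrees of freedom. */
static double dot(const double *x, const double *y, int n)
{
    double local = 0.0, global = 0.0;
    for (int i = 0; i < n; ++i)
        local += x[i] * y[i];
    MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
    return global;
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int n_global = 1000;                  /* total degrees of freedom  */
    const int n = n_global / size + (rank < n_global % size ? 1 : 0);

    double *a = malloc(n * sizeof *a);          /* local diagonal of A       */
    double *b = malloc(n * sizeof *b);          /* local right-hand side     */
    double *x = calloc(n, sizeof *x);           /* local solution, x0 = 0    */
    double *r = malloc(n * sizeof *r);          /* residual                  */
    double *z = malloc(n * sizeof *z);          /* preconditioned residual   */
    double *p = malloc(n * sizeof *p);          /* search direction          */
    double *q = malloc(n * sizeof *q);          /* q = A p                   */

    for (int i = 0; i < n; ++i) {
        a[i] = 2.0 + (double)i / n;             /* arbitrary SPD diagonal    */
        b[i] = 1.0;
        r[i] = b[i];                            /* r = b - A*x0 with x0 = 0  */
        z[i] = r[i] / a[i];                     /* Jacobi preconditioner     */
        p[i] = z[i];
    }

    double rz = dot(r, z, n);
    for (int it = 0; it < 200 && sqrt(dot(r, r, n)) > 1e-10; ++it) {
        for (int i = 0; i < n; ++i)
            q[i] = a[i] * p[i];                 /* local matvec (diagonal A) */
        double alpha = rz / dot(p, q, n);
        for (int i = 0; i < n; ++i) {
            x[i] += alpha * p[i];
            r[i] -= alpha * q[i];
            z[i]  = r[i] / a[i];
        }
        double rz_new = dot(r, z, n);
        for (int i = 0; i < n; ++i)
            p[i] = z[i] + (rz_new / rz) * p[i]; /* update search direction   */
        rz = rz_new;
    }

    double rnorm = sqrt(dot(r, r, n));          /* collective: all ranks call */
    if (rank == 0)
        printf("final residual norm: %e\n", rnorm);

    free(a); free(b); free(x); free(r); free(z); free(p); free(q);
    MPI_Finalize();
    return 0;
}

Because every rank owns a contiguous block of unknowns and the only communication is the MPI_Allreduce inside the inner products, the work per rank is proportional to its share of the degrees of freedom, which is the load-balance property the abstract attributes to the degree-of-freedom data distribution.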