Model reduction is an area of fundamental importance in many modeling and control applications. In this paper we analyze the use of parallel computing in model reduction methods based on balanced truncation of large-scale dense systems. These methods require the computation of the Gramians of a linear time-invariant system. Using a sign function-based solver to compute full-rank factors of the Gramians yields favorable computational properties in the subsequent computation of the reduced-order model, particularly for non-minimal systems. As sign function-based computations only require efficient implementations of basic linear algebra operations readily available, e.g., in BLAS, LAPACK, and ScaLAPACK, good performance of the resulting algorithms on parallel computers is to be expected. Our experimental results on a PC cluster show the performance and scalability of the parallel implementation.
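To make the technique named above concrete, the following is a minimal sequential sketch of a Newton sign-function iteration that returns a thin factor R with Wc ≈ R Rᵀ for the controllability Lyapunov equation A Wc + Wc Aᵀ + B Bᵀ = 0 (A assumed stable). The function name, tolerances, and the SVD-based column compression are illustrative assumptions (a rank-revealing QR is the more usual choice for full-rank factors); the paper's parallel codes would instead rely on ScaLAPACK/PBLAS kernels for the same dense operations.

```python
import numpy as np
from scipy.linalg import solve, svd

def lyap_sign_factor(A, B, tol=1e-8, max_iter=60):
    """Sketch (not the paper's code): sign-function iteration for
    A Wc + Wc A^T + B B^T = 0, A stable, returning R with Wc ~= R R^T.
    Only dense kernels (LU solve, GEMM, SVD/QR) are needed, which is why
    the method parallelizes well on top of ScaLAPACK-style libraries."""
    n = A.shape[0]
    Ak, Rk = A.copy(), B.copy()
    for _ in range(max_iter):
        Ainv = solve(Ak, np.eye(n))                 # explicit inverse via LU solve
        # Factor update matching Q_{k+1} = (Q_k + A_k^{-1} Q_k A_k^{-T}) / 2
        Rk = np.hstack([Rk, Ainv @ Rk]) / np.sqrt(2.0)
        # Column compression to the numerical rank (SVD here for brevity;
        # a rank-revealing QR serves the same purpose)
        U, s, _ = svd(Rk, full_matrices=False)
        r = max(1, int(np.sum(s > tol * s[0])))
        Rk = U[:, :r] * s[:r]
        # Sign iterate for A; converges to -I for a stable A
        Ak = 0.5 * (Ak + Ainv)
        if np.linalg.norm(Ak + np.eye(n), 1) < tol:
            break
    return Rk / np.sqrt(2.0)                        # Wc ~= (Rk Rk^T) / 2
```

In a balanced-truncation setting, the analogous factor of the observability Gramian is obtained from the dual equation, and the reduced-order model follows from an SVD of the product of the two thin factors; working with such factors rather than full Gramians is what keeps the subsequent steps cheap, especially for non-minimal systems.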