This work surveys the capabilities that sparse computation offers for improving performance when parallelized, either automatically or through a data-parallel compiler. Characterizing a sparse code becomes more complicated as code length increases: access patterns change from loop to loop, making it necessary to redefine the parallelization strategy. While dense computation essentially offers only the possibility of redistributing data structures, several other factors influence the performance of a code excerpt in the sparse field: the representation of the source data on file, the compressed storage of the data in memory, the creation of new nonzeros at run time (fill-in), and the number of available processors. We analyze the alternatives that arise from each issue, providing a guideline for the underlying compilation work and illustrating our techniques with examples on the Cray T3E.