Current parallelizing compilers cannot identify a significant fraction of parallelizable loops because they have complex or statically insufficiently defined access patterns. This type of loop occurs mostly in irregular, dynamic applications, which represent more than 50% of all applications [K. Kennedy, Compiler technology for machine-independent programming, Int. J. Paral. Prog. 22 (1) (1994) 79-98]. Making parallel computing succeed has therefore become conditioned on the ability of compilers to analyze and extract the parallelism from irregular applications. In this paper we present a survey of techniques that can complement current compiler capabilities by performing some form of data dependence analysis during program execution, when all information is available. After describing the problem of loop parallelization and its difficulties, we give a general overview of the need for run-time parallelization techniques. A survey of the various approaches to parallelizing partially parallel loops and fully parallel loops is presented. Special emphasis is placed on two parallelism-enabling transformations, privatization and reduction parallelization, because of their proven efficiency. The technique of speculatively parallelizing doall loops is presented in more detail. This survey limits itself to the domain of Fortran applications parallelized mostly in the shared-memory paradigm. Related work from the fields of parallel debugging and parallel simulation is also described. (C) 1998 Elsevier Science B.V. All rights reserved.