Iteration space tiling is a common strategy used by parallelizing compilers and in performance tuning of parallel codes. We address the problem of determining the tile size that minimizes the total execution time. We restrict our attention to uniform dependency computations with a two-dimensional, parallelogram-shaped iteration domain that can be tiled with lines parallel to the domain boundaries. The target architecture is a linear array (or a ring). Our model is developed in two steps. We first abstract each tile by two simple parameters, namely the tile period P_t and the inter-tile latency L_t, and we formulate and partially resolve the corresponding optimization problem independently of the machine and program. Next, we refine the model with realistic machine and program parameters, yielding a discrete nonlinear optimization problem. We solve this problem analytically, yielding a closed-form solution that can be used by a compiler before code generation. © 1997 Academic Press.
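To illustrate the kind of trade-off such a model captures (this is a generic sketch, not the paper's actual model or its closed-form solution), consider a pipelined execution on p processors where each tile takes a period P_t(s) = alpha + beta*s for tile size s, and successive processors start one tile period plus a fixed latency L_t apart. Larger tiles amortize the per-tile startup cost alpha but lengthen the pipeline fill; minimizing the resulting estimate over discrete tile sizes yields an intermediate optimum. All parameter names and values below are illustrative assumptions.

```python
# Illustrative tile-size optimization sketch (assumed model, not the
# paper's): total time = pipeline fill + steady-state tile execution.

def total_time(s, N=10**6, p=16, alpha=50.0, beta=1.0, Lt=200.0):
    """Estimated execution time for tile size s (iterations per tile).

    N     -- total iterations (assumed evenly divisible for simplicity)
    p     -- number of processors in the linear array / ring
    alpha -- per-tile startup overhead
    beta  -- cost per iteration inside a tile
    Lt    -- fixed inter-tile communication latency
    """
    Pt = alpha + beta * s              # tile period: startup + work
    fill = (p - 1) * (Pt + Lt)         # pipeline fill across processors
    tiles_per_proc = N / (p * s)       # tiles each processor executes
    return fill + tiles_per_proc * Pt  # fill + steady-state execution

def best_tile_size(candidates=range(1, 2001)):
    """Exhaustively minimize the discrete model over candidate sizes."""
    return min(candidates, key=total_time)

s_opt = best_tile_size()
```

For this convex model the continuous optimum is s* = sqrt(N*alpha / (p*(p-1)*beta)), so with the assumed parameters the discrete search lands near s ≈ 456; the paper's contribution is deriving such a closed form for a realistic machine and program model so a compiler can pick the tile size without searching.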