This paper studies the relationship between storage requirements and performance. Storage-related dependences inhibit optimizations for locality and parallelism. Techniques such as renaming and array expansion can eliminate all storage-related dependences, but do so at the expense of increased storage. This paper introduces the universal occupancy vector (UOV) for loops with a regular stencil of dependences. The UOV provides a schedule-independent storage reuse pattern that introduces no further dependences (other than those implied by true flow dependences). UOV-mapped code requires less storage than full array expansion and only slightly more storage than schedule-dependent minimal storage. We show that determining whether a vector is a UOV is NP-complete. However, an easily constructed but possibly nonminimal UOV can be used. We also present a branch-and-bound algorithm that finds the minimal UOV while maintaining a legal UOV at all times. Our experimental results show that UOV-mapped storage, coupled with tiling for locality, achieves better performance than tiling after array expansion, and accommodates larger problem sizes than untileable, storage-optimized code. Furthermore, storage mapping based on the UOV introduces negligible runtime overhead.
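To make the idea concrete, the following sketch (not from the paper; the function names and the specific recurrence are illustrative assumptions) contrasts full array expansion with a UOV-style storage mapping for a 1D recurrence with stencil dependences {1, 2}. Here the sum of the dependence vectors, 1 + 2 = 3, serves as an easily constructed (though possibly nonminimal) occupancy vector: iteration i writes location i mod 3, and no legal schedule can overwrite a value before its last use.

```python
def rec_expanded(n):
    # Full array expansion: one storage cell per iteration, O(n) space.
    # No storage-related dependences, but maximal storage.
    a = [0] * (n + 1)
    a[1] = 1
    for i in range(2, n + 1):
        a[i] = a[i - 1] + a[i - 2]  # stencil dependences {1, 2}
    return a[n]

def rec_uov(n):
    # UOV-mapped storage: the occupancy vector 3 (sum of the dependence
    # vectors 1 and 2) gives a cyclic buffer of 3 cells, O(1) space.
    # Iteration i+3 depends transitively on i+1 and i+2, so in any legal
    # schedule the value at i is dead before its cell is overwritten.
    buf = [0, 1, 0]  # buf[0] = f(0), buf[1] = f(1)
    for i in range(2, n + 1):
        buf[i % 3] = buf[(i - 1) % 3] + buf[(i - 2) % 3]
    return buf[n % 3]
```

Both versions compute the same values; the UOV-mapped version trades the O(n) expanded array for a fixed buffer whose size is independent of both the problem size and the schedule.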