Morphable cache architectures: Potential benefits

Citation
I. Kadayif et al., Morphable cache architectures: Potential benefits, ACM SIGPLAN Notices, 36(8), 2001, pp. 128-137
Citations number
28
Subject categories
Computer Science & Engineering
Journal title
ACM SIGPLAN NOTICES
ISSN journal
1523-2867
Volume
36
Issue
8
Year of publication
2001
Pages
128 - 137
Database
ISI
SICI code
1523-2867(200108)36:8<128:MCAPB>2.0.ZU;2-S
Abstract
Computer architects have tried to mitigate the consequences of high memory latencies using a variety of techniques. An example of these techniques is multi-level caches to counteract the latency that results from having a memory that is slower than the processor. Recent research has demonstrated that compiler optimizations that modify data layouts and restructure computation can be successful in improving memory system performance. However, in many cases, working with a fixed cache configuration prevents the application/compiler from obtaining the maximum performance. In addition, prompted by demand for portability, long battery life, and low-cost packaging, the computer industry has started viewing energy and power as decisive design factors, along with performance and cost. This makes the job of the compiler/user even more difficult, as one needs to strike a balance between low power/energy consumption and high performance. Consequently, adapting the code to the underlying cache/memory hierarchy is becoming more and more difficult.
In this paper, we take an alternate approach and attempt to adapt the cache architecture to the software needs. We focus on array-dominated applications and measure the potential benefits that could be gained from a morphable (reconfigurable) cache architecture. Our results show that not only do different applications work best with different cache configurations, but also that different loop nests in a given application demand different configurations. Our results also indicate that the most suitable cache configuration for a given application or a single nest depends strongly on the objective function being optimized. For example, minimizing cache memory energy requires a different cache configuration for each nest than an objective which tries to minimize the overall memory system energy. Based on our experiments, we conclude that fine-grain (loop nest-level) cache configuration management is an important step toward a solution to the challenging architecture/software tradeoffs awaiting system designers in the future.
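The sketch below is an illustrative toy, not taken from the paper: it shows the kind of fine-grain, per-nest selection the abstract describes, where each loop nest is matched against a space of candidate cache configurations and the configuration minimizing a chosen objective (cache energy vs. overall memory system energy) is picked. The configuration space, the nest profiles, and the cost model are all hypothetical placeholders; the paper's conclusions come from simulation of array-dominated benchmarks, not from a formula like this.

```python
# Hypothetical sketch of loop nest-level cache configuration selection.
# All numbers and the cost model are made up for illustration.

from itertools import product

# Hypothetical configuration space: (capacity in KB, associativity, line size in bytes).
CAPACITIES = [4, 8, 16, 32]
ASSOCIATIVITIES = [1, 2, 4]
LINE_SIZES = [16, 32, 64]
CONFIGS = list(product(CAPACITIES, ASSOCIATIVITIES, LINE_SIZES))


def objective(nest_profile, config, metric="cache_energy"):
    """Placeholder objective: combines a nest's (hypothetical) miss-rate model
    with a simple per-access cost for the given configuration."""
    capacity_kb, assoc, line = config
    misses = nest_profile["accesses"] * nest_profile["miss_rate"](capacity_kb, assoc, line)
    hits = nest_profile["accesses"] - misses
    if metric == "cache_energy":
        # Larger, more associative caches cost more energy per hit; misses are cheap here.
        return hits * (0.1 * assoc + 0.01 * capacity_kb) + misses * 5.0
    # "memory_energy": off-chip accesses dominate, so misses are weighted heavily.
    return hits * (0.1 * assoc + 0.01 * capacity_kb) + misses * 50.0


def best_config_per_nest(nests, metric):
    """Fine-grain (loop nest-level) management: pick the minimizing
    configuration independently for each nest."""
    return {
        name: min(CONFIGS, key=lambda c: objective(profile, c, metric))
        for name, profile in nests.items()
    }


if __name__ == "__main__":
    # Two hypothetical nests with different locality behaviour.
    nests = {
        "nest1": {"accesses": 1_000_000,
                  "miss_rate": lambda cap, assoc, line: 0.2 / (cap * assoc)},
        "nest2": {"accesses": 500_000,
                  "miss_rate": lambda cap, assoc, line: 0.05 + 0.5 / line},
    }
    print(best_config_per_nest(nests, "cache_energy"))
    print(best_config_per_nest(nests, "memory_energy"))
```

Even in this toy model, the per-nest optimum under the cache-energy objective generally differs from the optimum under the overall memory-energy objective, which mirrors the abstract's observation that the best configuration depends on the objective function being optimized.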