V-P CACHE - A STORAGE EFFICIENT VIRTUAL CACHE ORGANIZATION

Citation
S.L. Min et al., V-P CACHE - A STORAGE EFFICIENT VIRTUAL CACHE ORGANIZATION, Microprocessors and Microsystems, 17(9), 1993, pp. 537-546
Citations number
23
Categorie Soggetti
Computer Sciences; Engineering, Electrical & Electronic; Computer Applications & Cybernetics
ISSN journal
01419331
Volume
17
Issue
9
Year of publication
1993
Pages
537 - 546
Database
ISI
SICI code
0141-9331(1993)17:9<537:VC-ASE>2.0.ZU;2-M
Abstract
In high-performance processors, providing a fast cache hit time is one of the most important design issues, if not the most important one, since the cache hit time is one of the key determinants of the processor cycle time. Direct-mapped virtual caches would be a nice match with such high-speed processors since they have the potential for a very fast hit time. Their fast hit time comes mainly from two sources: (1) they do not require a preceding TLB access on a cache hit since they are accessed by virtual addresses; (2) they do not suffer from delays due to additional comparators and multiplexers that would otherwise be present in set-associative caches. However, their hit time advantage does not come without drawbacks. Being virtual caches, they require an anti-aliasing scheme (either in hardware or software) to solve the well-known synonym problem and, being direct-mapped caches, they yield higher miss ratios than set-associative caches of comparable size. This paper proposes a novel cache organization called V-P cache that improves the miss ratios of direct-mapped virtual caches. The key to the proposed scheme is the use of the physical address to reaccess the cache when the cache access based on the virtual address is a miss. Therefore, a given memory block can be placed in two different sets, one based on the virtual address and the other based on the physical address. By providing the benefit of two-way set-associative caches, the proposed scheme can eliminate many misses due to conflicts among frequently used blocks that happen to be mapped to the same set. Another important benefit of the proposed scheme is that it reduces the so-called anti-aliasing misses that result from accesses to blocks previously evicted from the cache for anti-aliasing purposes.
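The two-probe lookup described above can be sketched in a toy simulator: probe the set indexed by the virtual address first and, on a miss, reprobe the set indexed by the physical address, so each memory block has two candidate sets. This is a minimal sketch, not the paper's exact design; the cache parameters, the physical-block tagging, and the spill-on-conflict placement policy are all illustrative assumptions.

```python
# Toy model of the V-P lookup policy: a direct-mapped cache probed first
# with the virtual-address set index, then with the physical-address set
# index on a miss. Parameters and placement policy are assumptions made
# for illustration only.

BLOCK_BITS = 6   # 64-byte blocks (assumed)
NUM_SETS = 8     # tiny cache for demonstration (assumed)

def set_index(addr):
    """Direct-mapped set index taken from the block address bits."""
    return (addr >> BLOCK_BITS) % NUM_SETS

class VPCache:
    def __init__(self):
        # Each set holds one physical block tag (None = empty). Tagging
        # by physical block address sidesteps synonyms in this toy model.
        self.sets = [None] * NUM_SETS

    def access(self, vaddr, paddr):
        """Return 'v-hit', 'p-hit', or 'miss', filling on a miss."""
        pblock = paddr >> BLOCK_BITS
        v_idx = set_index(vaddr)
        if self.sets[v_idx] == pblock:        # first probe: virtual index
            return "v-hit"
        p_idx = set_index(paddr)
        if self.sets[p_idx] == pblock:        # second probe: physical index
            return "p-hit"
        # Double miss: fill the virtually indexed set, spilling to the
        # physically indexed set when the former is occupied (an assumed
        # policy chosen to show the two-way behavior).
        if self.sets[v_idx] is None:
            self.sets[v_idx] = pblock
        else:
            self.sets[p_idx] = pblock
        return "miss"

# Two blocks that collide on the virtual index but not the physical index
# can coexist, behaving like a two-way set-associative cache:
c = VPCache()
c.access(0x0000, 0x0000)               # miss, fills set 0
c.access(0x2000, 0x1040)               # miss, set 0 taken -> fills set 1
print(c.access(0x0000, 0x0000))        # v-hit
print(c.access(0x2000, 0x1040))        # p-hit
```

The usage example shows the conflict case the abstract targets: in a plain direct-mapped cache the second block would evict the first, but the second probe path lets both stay resident.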
A quantitative evaluation based on trace-driven simulations using ATUM traces reveals that conventional direct-mapped virtual caches utilize only 50% of total cache blocks due to replacements of cache blocks for anti-aliasing purposes. The proposed V-P cache, on the other hand, is shown to utilize more than 80% of the total cache blocks. The results also show that this improved cache storage utilization yields miss ratio improvements of up to 25%.