In this paper, we propose a new virtual cache architecture that reduces memory latency by combining the merits of the direct-mapped cache and the set-associative cache. The entire cache memory is divided into n banks, and the operating system assigns one bank to each process at creation time. Each process then runs on its assigned bank, which behaves like a direct-mapped cache. If a cache miss occurs in the active home bank, the data is fetched either from another bank or from main memory, as in a set-associative cache. The victim for cache replacement is selected from the lines belonging to the process that is most remote from being scheduled. Trace-driven simulations confirm that the new scheme removes almost as many conflict misses as a set-associative cache, while its access time remains close to that of a direct-mapped cache.
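The lookup and replacement policy described above can be sketched as a small behavioral model. This is an illustrative sketch only: the bank count, set count, per-process home-bank assignment rule, and round-robin scheduling order below are assumptions for demonstration, not parameters taken from the paper.

```python
class BankedCache:
    """Behavioral sketch of the banked virtual cache (assumed parameters)."""

    def __init__(self, num_banks=4, sets_per_bank=8):
        self.num_banks = num_banks
        self.sets_per_bank = sets_per_bank
        # Each bank is direct-mapped: one (tag, owner_pid) entry per set.
        self.banks = [[None] * sets_per_bank for _ in range(num_banks)]
        self.home = {}       # pid -> home bank assigned by the OS
        self.schedule = []   # scheduling order; front = next to be scheduled

    def create_process(self, pid):
        # Assumed assignment rule: home bank chosen by pid modulo bank count.
        self.home[pid] = pid % self.num_banks
        self.schedule.append(pid)

    def access(self, pid, addr):
        idx = addr % self.sets_per_bank
        tag = addr // self.sets_per_bank
        home = self.home[pid]
        # Fast path: direct-mapped lookup in the home bank.
        if self.banks[home][idx] == (tag, pid):
            return "home-hit"
        # Slow path: search the other banks, like a set-associative cache.
        for b in range(self.num_banks):
            if b != home and self.banks[b][idx] == (tag, pid):
                return "remote-hit"
        # Miss: prefer an empty way; otherwise evict the line owned by the
        # process most remote from being scheduled (latest in the order).
        victim_bank, best_rank = home, -1
        for b in range(self.num_banks):
            entry = self.banks[b][idx]
            if entry is None:
                victim_bank = b
                break
            owner = entry[1]
            rank = (self.schedule.index(owner)
                    if owner in self.schedule else len(self.schedule))
            if rank > best_rank:
                best_rank, victim_bank = rank, b
        self.banks[victim_bank][idx] = (tag, pid)
        return "miss"
```

In this model, the common case costs a single direct-mapped probe of the home bank; only on a home-bank miss does the search widen to the remaining banks, which is how the scheme approaches direct-mapped access time while retaining set-associative conflict behavior.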