M. Oguchi et al., "A Proposition and Evaluation of DSM Models Suitable for a Wide Area Distributed Environment Realized on High-Performance Networks," IEICE Transactions on Communications, E79-B(2), 1996, pp. 153-162
Distributed shared memory (DSM) is an attractive option for realizing functionally distributed computing in a wide area distributed environment because of its simplicity and flexibility in software programming. Until now, however, distributed shared memory has mainly been studied in local environments. In a widely distributed environment, communication latency greatly affects system performance. Moreover, the bandwidth of networks available over wide areas has been increasing dramatically, so a DSM architecture built on high-performance networks must differ from one designed for low-speed networks. In this paper, distributed shared memory models for a widely distributed environment are discussed and evaluated. First, two existing distributed shared memory models are examined: shared virtual memory and replicated shared memory. Next, an improved replicated shared memory model, which uses internal machine memory, is proposed. This model assumes the existence of a seamless, multicast wide area network infrastructure, for example an ATM network. A prototype of this model using multi-thread programming has been implemented on multi-CPU SPARCstations and an ATM-LAN. These DSM models are compared with SCRAMNet(TM), whose mechanism is based on replicated shared memory. The results of this evaluation show the superiority of replicated shared memory over shared virtual memory when the network distance is large. While replicated shared memory using external memory is sensitive to the ratio of local to global accesses, replicated shared memory using internal machine memory is suitable for a wide variety of cases. The replicated shared memory model is considered particularly suitable for applications that impose real-time operation in a widely distributed environment, since latency hiding techniques such as context switching and data prefetching are not effective under real-time demands.
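The replicated-shared-memory idea summarized above can be sketched in a few lines: every node holds a full local replica of the shared address space, reads are purely local (so read latency is independent of network distance), and writes are applied locally and then multicast to all peers. The `Node` class and the in-process "multicast" loop below are illustrative assumptions for a minimal simulation, not the paper's actual ATM-based implementation.

```python
# Minimal sketch of replicated shared memory: local reads, multicast writes.
# This is an in-process simulation; a real system would multicast the write
# over the network (e.g. an ATM WAN, as the paper assumes).

class Node:
    def __init__(self, name):
        self.name = name
        self.replica = {}   # local copy of the shared address space
        self.peers = []     # other nodes reachable via multicast

    def read(self, addr):
        # Purely local access: no network round trip on a read.
        return self.replica.get(addr)

    def write(self, addr, value):
        # Update the local replica, then propagate the write to every
        # peer (modeling a single multicast on the wide area network).
        self.replica[addr] = value
        for peer in self.peers:
            peer._apply_remote(addr, value)

    def _apply_remote(self, addr, value):
        # Incoming replicated write from another node.
        self.replica[addr] = value


def connect(nodes):
    # Fully connect the nodes, as a seamless multicast network would.
    for n in nodes:
        n.peers = [p for p in nodes if p is not n]


a, b = Node("a"), Node("b")
connect([a, b])
a.write(0x10, "data")
print(b.read(0x10))  # the write has been replicated into b's local memory
```

By contrast, in a shared-virtual-memory model a read miss fetches a page across the network, so read latency grows with network distance; this is the asymmetry behind the abstract's conclusion that replication wins as the network gets longer.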