In this paper we describe methods for mitigating the degradation in performance caused by high latencies in parallel and distributed networks. For example, given any "dataflow" type of algorithm that runs in $T$ steps on an $n$-node ring with unit link delays, we show how to run the algorithm in $O(T)$ steps on any $n$-node bounded-degree connected network with average link delay $O(1)$. This is a significant improvement over prior approaches to latency hiding, which require slowdowns proportional to the maximum link delay.
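To illustrate the gap between the two measures (an illustrative example, not taken from the paper; we write $d_{\max}$ for the maximum link delay), consider an $n$-node ring in which a single link has delay $\Theta(n)$ and the remaining $n-1$ links have unit delay. Then
\[
d_{\max} = \Theta(n), \qquad d_{\mathrm{ave}} = \frac{(n-1)\cdot 1 + \Theta(n)}{n} = O(1),
\]
so a simulation whose slowdown is proportional to the maximum link delay suffers a $\Theta(n)$ factor, whereas a slowdown governed by the average link delay remains $O(1)$.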
In the case when the network has average link delay $d_{\mathrm{ave}}$, our simulation runs in $O(\sqrt{d_{\mathrm{ave}}}\,T)$ steps using $n/\sqrt{d_{\mathrm{ave}}}$ processors, thereby preserving efficiency.
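As a quick check of the efficiency claim (a sketch based only on the bounds stated above), the processor-time product of the simulation matches that of the original computation up to constant factors:
\[
\underbrace{\frac{n}{\sqrt{d_{\mathrm{ave}}}}}_{\text{processors}} \cdot \underbrace{O\!\left(\sqrt{d_{\mathrm{ave}}}\,T\right)}_{\text{steps}} = O(nT),
\]
which equals the $O(nT)$ work performed by the $n$-node ring running for $T$ steps; in this sense the simulation is work-preserving.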
We also show how to efficiently simulate an $n \times n$ array with unit link delays using slowdown $\tilde{O}(d_{\mathrm{ave}}^{2/3})$ on a two-dimensional array with average link delay $d_{\mathrm{ave}}$. Last, we present results for the case in which large local databases are involved in the computation.