Iterative simulation techniques are becoming standard tools in Bayesian statistics, a notable example being the Gibbs sampler, whose draws form a Markov chain. Standard practice is to run the simulation until convergence is approached, in the sense of the draws appearing to be stationary. At this point, the set of stationary draws can be used to provide an estimate of the target distribution. However, when the distributions involved are normal and the draws form a Markov chain, the target distribution can be reliably estimated by maximum likelihood (ML) using draws obtained before convergence to the target distribution. This fact suggests that the normal-based ML estimates can be exploited to estimate the mean and covariance matrix of an approximately normal target distribution before convergence is reached, and that these estimates can be used to define a restarting distribution for the simulation. Here, we describe the needed technology and explore its relevance to practice. The tentative conclusion is that the Markov-Normal restarting procedure can be computationally advantageous when the target distribution is nearly normal, especially in massively parallel or distributed computing environments, where many sequences can be run for the same effective cost as one sequence.
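To make the estimate-and-restart idea concrete, the following is a minimal sketch in Python. It assumes a univariate Gaussian AR(1) chain as a stand-in for the Markov chain of draws, fits the stationary mean and variance by conditional (least-squares) ML from a short pre-convergence run, and then draws restart points from the fitted normal. The AR(1) model, the fitting method, and all function names here are illustrative assumptions, not the paper's general multivariate procedure.

```python
# Sketch of the Markov-Normal restart idea for a Gaussian AR(1) chain:
#   x_t = mu + rho * (x_{t-1} - mu) + eps_t,  eps_t ~ N(0, s2 * (1 - rho^2)),
# so the stationary (target) distribution is N(mu, s2). The AR(1) model and
# the conditional-ML fit below are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)

def run_chain(x0, mu, rho, s2, n):
    """Simulate n draws of the Gaussian AR(1) chain starting at x0."""
    innov_sd = np.sqrt(s2 * (1.0 - rho**2))
    x = np.empty(n)
    x[0] = x0
    for t in range(1, n):
        x[t] = mu + rho * (x[t - 1] - mu) + innov_sd * rng.normal()
    return x

def fit_stationary_normal(x):
    """Conditional ML (least squares, conditioning on x[0]) for the AR(1)
    parameters; returns estimates of the stationary mean and variance.
    Conditioning on the starting value is what makes the fit valid even
    when the chain has not yet converged to its target."""
    y, z = x[1:], x[:-1]
    rho_hat = np.cov(y, z, bias=True)[0, 1] / np.var(z)
    mu_hat = (y.mean() - rho_hat * z.mean()) / (1.0 - rho_hat)
    resid = y - (mu_hat + rho_hat * (z - mu_hat))
    s2_hat = resid.var() / (1.0 - rho_hat**2)  # stationary variance
    return mu_hat, s2_hat

# Short run started far from the target N(0, 1), i.e. before convergence.
draws = run_chain(x0=10.0, mu=0.0, rho=0.9, s2=1.0, n=500)
mu_hat, s2_hat = fit_stationary_normal(draws)

# Restart several parallel chains from the fitted normal, which should be
# far closer to the target than the original overdispersed starting point.
restarts = rng.normal(mu_hat, np.sqrt(s2_hat), size=8)
print(f"estimated target: N({mu_hat:.2f}, {s2_hat:.2f})")
print("restart points:", np.round(restarts, 2))
```

In a parallel or distributed setting, each of the restart points above would seed its own sequence, which is the regime in which the abstract suggests the procedure is most advantageous.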