The Monte Carlo EM (MCEM) algorithm is a modification of the EM algorithm in which the expectation in the E-step is computed numerically through Monte Carlo simulation. The most flexible and generally applicable approach to obtaining a Monte Carlo sample in each iteration of an MCEM algorithm is through Markov chain Monte Carlo (MCMC) routines such as the Gibbs and Metropolis-Hastings samplers. Although MCMC estimation presents a tractable solution to problems where the E-step is not available in closed form, two issues arise when implementing this MCEM routine: (1) how do we minimize the computational cost of obtaining an MCMC sample? and (2) how do we choose the Monte Carlo sample size? We address the first question through an application of importance sampling, whereby samples drawn during previous EM iterations are recycled rather than running an MCMC sampler at each MCEM iteration. The second question is addressed through an application of regenerative simulation: we obtain approximately independent and identically distributed samples by subsampling the generated MCMC sample during different renewal periods. Standard central limit theorems may then be used to gauge the Monte Carlo error. In particular, we apply an automated rule that increases the Monte Carlo sample size whenever the Monte Carlo error overwhelms the EM estimate at a given iteration. We illustrate our MCEM algorithm through analyses of two datasets fit by generalized linear mixed models. As part of these applications, we demonstrate the improvement in computational cost and efficiency of our routine over alternative MCEM strategies.
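
The importance-sampling recycling step can be sketched on a toy model. In this hypothetical example (not the paper's GLMM application), the latent variable has a normal complete-data density with mean `theta`; samples drawn at the previous EM iterate `theta_old` are reweighted by self-normalized importance weights, the density ratio at `theta_new` versus `theta_old`, so the E-step expectation can be re-estimated without a fresh MCMC run. All names (`log_density`, `theta_old`, `theta_new`) are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_density(u, theta):
    # Toy "complete-data" log-density (up to a constant): latent u ~ N(theta, 1).
    # This stands in for the model-specific density; not the paper's GLMM.
    return -0.5 * (u - theta) ** 2

# Monte Carlo sample obtained at the previous EM iterate theta_old.
theta_old, theta_new = 0.0, 0.5
u = rng.normal(theta_old, 1.0, size=50_000)

# Recycle the sample: self-normalized importance weights proportional to the
# density ratio at the new parameter value versus the old one.
log_w = log_density(u, theta_new) - log_density(u, theta_old)
w = np.exp(log_w - log_w.max())   # subtract max for numerical stability
w /= w.sum()

# Weighted Monte Carlo estimate of E[u] under theta_new (true value 0.5),
# computed without drawing a new MCMC sample.
est = np.sum(w * u)
```

In practice the reweighted sample degrades as the current iterate moves away from the iterate at which the sample was drawn, which is why a rule for refreshing the sample (and for growing its size as the Monte Carlo error becomes comparable to the EM step) is needed.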