A generalisation of the ECM algorithm (Meng & Rubin, 1993), which is itself an extension of the EM algorithm (Dempster, Laird & Rubin, 1977), can be obtained by replacing some CM-steps of ECM, which maximise the constrained expected complete-data log-likelihood function, with steps that maximise the correspondingly constrained actual likelihood function. This algorithm, which we call the ECME algorithm, for Expectation/Conditional Maximisation Either, shares with both EM and ECM their stable monotone convergence and basic simplicity of implementation relative to competing faster-converging methods. Moreover, ECME can have a substantially faster convergence rate than either EM or ECM, measured using either the number of iterations or actual computer time. There are two reasons for this improvement. First, in some of ECME's maximisation steps the actual likelihood is being conditionally maximised, rather than a current approximation to it, as with EM and ECM. Secondly, ECME allows faster-converging numerical methods to be used on only those constrained maximisations where they are most efficacious. Illustrative ECME algorithms are presented with both closed-form and iterative CM-steps, which demonstrate the faster rate of convergence and the associated easier assessment of convergence. Also, theoretical expressions are presented and illustrated regarding the rate of convergence of ECME. Finally, relationships with Markov chain Monte Carlo methods are discussed.
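To make the "Either" idea concrete, the following is a minimal sketch of an ECME iteration for fitting a univariate Student-t model, a standard illustration of the algorithm; the specific setup (univariate data, a grid search over the degrees of freedom, the starting values) is an assumption for this sketch, not a construction taken from the paper. The E-step and the closed-form CM-step for the location and scale maximise the constrained expected complete-data log-likelihood, while the CM-step for the degrees of freedom `nu` maximises the actual log-likelihood directly:

```python
import math

def t_loglik(y, mu, sigma2, nu):
    """Observed-data log-likelihood under a univariate Student-t model."""
    n = len(y)
    const = (math.lgamma((nu + 1) / 2) - math.lgamma(nu / 2)
             - 0.5 * math.log(nu * math.pi * sigma2))
    return n * const - (nu + 1) / 2 * sum(
        math.log(1 + (yi - mu) ** 2 / (nu * sigma2)) for yi in y)

def ecme_t(y, n_iter=30):
    """Sketch of ECME for the t model (illustrative, not the paper's code):
    closed-form CM-steps for (mu, sigma2) against the expected
    complete-data log-likelihood, then a CM-step for nu that maximises
    the ACTUAL log-likelihood, here by a simple 1-D grid search."""
    mu = sum(y) / len(y)
    sigma2 = sum((yi - mu) ** 2 for yi in y) / len(y)
    nu = 10.0
    nu_grid = [0.5 * k for k in range(2, 101)]  # candidate nu: 1.0 .. 50.0
    history = []
    for _ in range(n_iter):
        # E-step: expected latent weights given current parameters
        w = [(nu + 1) / (nu + (yi - mu) ** 2 / sigma2) for yi in y]
        # CM-step 1 (closed form): update (mu, sigma2) with nu held fixed
        mu = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
        sigma2 = sum(wi * (yi - mu) ** 2 for wi, yi in zip(w, y)) / len(y)
        # CM-step 2 (the "Either" step): maximise the actual likelihood
        # over nu with (mu, sigma2) held fixed
        nu = max(nu_grid, key=lambda v: t_loglik(y, mu, sigma2, v))
        history.append(t_loglik(y, mu, sigma2, nu))
    return mu, sigma2, nu, history
```

Because each step either performs an ordinary EM-type update for fixed `nu` or directly increases the actual log-likelihood over `nu`, the recorded `history` is monotone nondecreasing, which is the stable monotone convergence the abstract refers to; in practice the grid search would be replaced by a faster-converging 1-D numerical maximiser.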