"Efficient training algorithms for HMM's using incremental estimation" [1]
investigates EM procedures that increase training speed. The authors' claim
that these are GEM [2] procedures is incorrect. We discuss why this is so,
provide an example of nonmonotonic convergence to a local maximum in
likelihood, and outline conditions that guarantee such convergence.