We discuss Hinton's (1989) relative payoff procedure (RPP), a static reinforcement learning algorithm whose foundation is not stochastic gradient ascent. We show circumstances under which applying the RPP is guaranteed to increase the mean return, even though it can make large changes in the values of the parameters. The proof is based on a mapping between the RPP and a form of the expectation-maximization procedure of Dempster, Laird, and Rubin (1977).
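
For concreteness, a sketch of the update in question, under the standard setting assumed here (a vector of binary stochastic units with firing probabilities $p_i$ and nonnegative returns $r(\mathbf{a})$; the notation is illustrative, not taken from this abstract): one application of the RPP replaces each firing probability by the reward-weighted average of the corresponding action component,
\[
p_i \;\leftarrow\; \frac{\mathbb{E}_{\mathbf{a}\sim P(\cdot\mid\mathbf{p})}\!\left[r(\mathbf{a})\,a_i\right]}{\mathbb{E}_{\mathbf{a}\sim P(\cdot\mid\mathbf{p})}\!\left[r(\mathbf{a})\right]},
\qquad
P(\mathbf{a}\mid\mathbf{p}) \;=\; \prod_i p_i^{a_i}\,(1-p_i)^{1-a_i},
\]
so the parameters can jump anywhere in $[0,1]$ in a single step rather than moving along a gradient.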