Although the n-th order cross-entropy (nCE) error function resolves the incorrect saturation problem of the conventional error backpropagation (EBP) algorithm, the performance of multilayer perceptrons (MLPs) trained with the nCE function depends heavily on the order of nCE. In this paper, we propose an adaptive learning rate that markedly reduces the sensitivity of MLP performance to the nCE order. Additionally, we propose limiting the error signal values at output nodes for stable learning with the adaptive learning rate. Through simulations of handwritten digit recognition and isolated-word recognition tasks, we verify that the proposed method successfully reduces the performance dependency of MLPs on the nCE order while maintaining the advantages of the nCE function.
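
The two ideas above, an order-dependent error signal with a limit on its magnitude at the output nodes, can be sketched as follows. This is a minimal illustration, not the paper's method: the exact nCE error-signal form, the clipping threshold `cap`, and the learning-rate scaling rule are all assumptions introduced here for illustration.

```python
import numpy as np

def nce_error_signal(t, y, n=3, cap=0.5):
    """Sketch of an nCE-style output error signal (hypothetical form).

    An error signal proportional to (t - y)**n (n odd) grows slowly for
    small errors, which avoids the incorrect saturation of standard EBP.
    Clipping to [-cap, cap] illustrates the proposed limiting of error
    signals for stable learning; the value of `cap` is illustrative.
    """
    e = (t - y) ** n                 # order-n error signal (n odd)
    return np.clip(e, -cap, cap)     # limit magnitude at output nodes

def adaptive_lr(base_lr, n):
    """Illustrative adaptive learning rate.

    (t - y)**n shrinks as the order n grows, so scaling the base rate up
    with n compensates; the scaling rule below is a stand-in, not the
    rule proposed in the paper.
    """
    return base_lr * (2.0 ** (n - 1))  # hypothetical compensation factor
```

For example, with `t = 1.0`, `y = 0.0`, and `n = 3`, the raw signal is 1.0 and the clipped signal is 0.5, so large errors no longer dominate a single update step.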