D. Anguita and B.A. Gomes, "Mixing floating-point and fixed-point formats for neural-network learning on neuroprocessors," Microprocessing and Microprogramming, 41(10), 1996, pp. 757-769
We examine the efficient implementation of back-propagation (BP) type algorithms on T0 [3], a vector processor with a fixed-point engine designed for neural-network simulation. Using Matrix Back Propagation (MBP) [2], we achieve asymptotically optimal performance on T0 (about 0.8 GOPS) for both the forward and backward phases, which is not possible with the standard on-line BP algorithm. We use a mixture of fixed- and floating-point operations in order to guarantee both high efficiency and fast convergence. Although the most expensive computations are implemented in fixed point, we achieve a rate of convergence comparable to that of the floating-point version. The time taken for conversion between fixed- and floating-point formats is also shown to be reasonably low.
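
A minimal C sketch of the mixed-format idea the abstract describes: the expensive inner products run in fixed point (as on T0's fixed-point engine), while the weight update stays in floating point to preserve convergence. The Q15 format, the saturating conversion, and the single linear neuron are illustrative assumptions, not the paper's actual MBP kernels or T0's native arithmetic.

    /* Mixed fixed-/floating-point BP sketch (assumed Q15 format). */
    #include <stdio.h>
    #include <stdint.h>
    #include <math.h>

    #define QBITS 15              /* hypothetical Q15 fixed-point format */
    #define ONE   (1 << QBITS)

    static int16_t to_fixed(float x)  /* float -> Q15, saturating */
    {
        float y = x * ONE;
        if (y >  32767.0f) return  32767;
        if (y < -32768.0f) return -32768;
        return (int16_t)lrintf(y);
    }

    static float to_float(int32_t q)  /* Q15 (in a 32-bit word) -> float */
    {
        return (float)q / ONE;
    }

    /* Forward phase: fixed-point dot product with a 32-bit accumulator,
     * the kind of kernel a fixed-point vector engine executes cheaply. */
    static float neuron_forward(const int16_t *w, const int16_t *x, int n)
    {
        int32_t acc = 0;
        for (int i = 0; i < n; i++)
            acc += (int32_t)w[i] * x[i];  /* Q15 * Q15 -> Q30 products */
        return to_float(acc >> QBITS);    /* rescale Q30 -> Q15, then float */
    }

    int main(void)
    {
        enum { N = 4 };
        float wf[N] = {0.25f, -0.5f, 0.125f, 0.75f};  /* float master weights */
        float xf[N] = {0.5f,   0.5f, -0.25f, 0.1f};
        int16_t w[N], x[N];

        for (int i = 0; i < N; i++) {
            w[i] = to_fixed(wf[i]);
            x[i] = to_fixed(xf[i]);
        }

        float y = neuron_forward(w, x, N);  /* fixed-point forward pass */

        /* Backward phase sketch for a linear neuron and squared error:
         * the gradient step is kept in floating point; the updated
         * weights are then re-quantized for the next forward pass. */
        float target = 0.1f, lr = 0.05f;
        float err = y - target;
        for (int i = 0; i < N; i++) {
            wf[i] -= lr * err * to_float((int32_t)x[i]); /* float update */
            w[i] = to_fixed(wf[i]);                      /* re-quantize */
        }

        printf("output %.4f, error %.4f\n", y, err);
        return 0;
    }

Keeping a floating-point master copy of the weights and re-quantizing after each update is one common way to reconcile fixed-point speed with floating-point convergence; the conversion cost is linear in the number of weights, consistent with the abstract's claim that it remains reasonably low.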